• So, "Tea" – the dating app that supposedly revolutionizes the way women connect – has found itself in a scandal reminiscent of your aunt’s infamous fruitcake recipe: a delightful mix of ingredients that just doesn’t belong together. Who knew that matchmaking could come with a side of “jlaajil”?

    Instead of finding Mr. Right, users are discovering a world where awkward encounters and unsolicited advice are the norm. Talk about a brewing disaster! It’s like they say: if the tea isn’t hot, don’t bother pouring it.

    Let’s raise a cup to the thrill of swiping right on chaos! Because nothing screams romance like a good old-fashioned scandal, right?

    #TeaApp #DatingDisasters #ModernRom
    So, "Tea" – the dating app that supposedly revolutionizes the way women connect – has found itself in a scandal reminiscent of your aunt’s infamous fruitcake recipe: a delightful mix of ingredients that just doesn’t belong together. Who knew that matchmaking could come with a side of “jlaajil”? Instead of finding Mr. Right, users are discovering a world where awkward encounters and unsolicited advice are the norm. Talk about a brewing disaster! It’s like they say: if the tea isn’t hot, don’t bother pouring it. Let’s raise a cup to the thrill of swiping right on chaos! Because nothing screams romance like a good old-fashioned scandal, right? #TeaApp #DatingDisasters #ModernRom
    ARABHARDWARE.NET
    Tea: تطبيق المواعدة النسوي الأشهر والفضيحة ذات "الجلاجل"!
    The post Tea: تطبيق المواعدة النسوي الأشهر والفضيحة ذات "الجلاجل"! appeared first on عرب هاردوير.
    Like
    Love
    Wow
    Angry
    Sad
    202
    1 Comentários 0 Compartilhamentos 0 Anterior
  • When you think about horror films, what comes to mind? Creepy monsters? Jump scares? The classic trope of a group of friends who somehow forget that splitting up is a bad idea? Well, hold onto your popcorn, because the talented folks at ESMA are here to remind us that the only thing scarier than a killer lurking in the shadows is the idea of them trying to be funny while doing it.

    Enter "Claw," a short film that dares to blend the horror genre with a sprinkle of humor – because who wouldn't want to laugh while being chased by a guy with a chainsaw? This cinematic masterpiece, which apparently took inspiration from the likes of "Last Action Hero," is like if a horror movie and a stand-up comedian had a baby, and we’re all just waiting for the punchline as we hide behind our couches.

    Imagine a young cinephile named Andrew, who is living his best life by binge-watching horror classics. However, instead of the usual blood and guts, he encounters a version of horror that leaves you both terrified and chuckling nervously. It’s like the directors at ESMA sat down and said, “Why not take everything that terrifies us and add a dash of quirky humor?” Honestly, it’s a wonder they didn’t throw in a musical number.

    Sure, we all adore the suspense that makes our hearts race, but the thought of Andrew laughing nervously at a killer with a penchant for puns? Now that’s a new level of fear. Who knew that horror could provide comic relief while simultaneously making us question our life choices? Forget battling your demons; let’s just joke about them instead! And if you think about it, that’s probably the best coping mechanism we’ve got.

    But beware! As you dive into this horror-comedy concoction, you might just find yourself chuckling at the most inappropriate moments. Like when the killer slips on a banana peel right before going for the kill – because nothing says “I’m terrified” like a comedy skit in a death scene. After all, isn’t that the essence of horror? To laugh in the face of danger, even if it’s through the lens of ESMA’s latest cinematic exploration?

    So, if you’re looking for a good time that sends shivers down your spine while keeping you in stitches, “Claw” is your go-to film. Just remember to keep a straight face when explaining to your friends why you’re laughing while watching someone get chased by a masked figure. But hey, in the world of horror, even the scariest movies can have a light-hearted twist – because why not?

    Embrace the terror, welcome the humor, and prepare yourself for a rollercoaster of emotions with "Claw." After all, if we can’t laugh at our fears, what’s the point?

    #ClawFilm #HorrorComedy #ESMA #CinematicHumor #HorrorMovies
    When you think about horror films, what comes to mind? Creepy monsters? Jump scares? The classic trope of a group of friends who somehow forget that splitting up is a bad idea? Well, hold onto your popcorn, because the talented folks at ESMA are here to remind us that the only thing scarier than a killer lurking in the shadows is the idea of them trying to be funny while doing it. Enter "Claw," a short film that dares to blend the horror genre with a sprinkle of humor – because who wouldn't want to laugh while being chased by a guy with a chainsaw? This cinematic masterpiece, which apparently took inspiration from the likes of "Last Action Hero," is like if a horror movie and a stand-up comedian had a baby, and we’re all just waiting for the punchline as we hide behind our couches. Imagine a young cinephile named Andrew, who is living his best life by binge-watching horror classics. However, instead of the usual blood and guts, he encounters a version of horror that leaves you both terrified and chuckling nervously. It’s like the directors at ESMA sat down and said, “Why not take everything that terrifies us and add a dash of quirky humor?” Honestly, it’s a wonder they didn’t throw in a musical number. Sure, we all adore the suspense that makes our hearts race, but the thought of Andrew laughing nervously at a killer with a penchant for puns? Now that’s a new level of fear. Who knew that horror could provide comic relief while simultaneously making us question our life choices? Forget battling your demons; let’s just joke about them instead! And if you think about it, that’s probably the best coping mechanism we’ve got. But beware! As you dive into this horror-comedy concoction, you might just find yourself chuckling at the most inappropriate moments. Like when the killer slips on a banana peel right before going for the kill – because nothing says “I’m terrified” like a comedy skit in a death scene. After all, isn’t that the essence of horror? To laugh in the face of danger, even if it’s through the lens of ESMA’s latest cinematic exploration? So, if you’re looking for a good time that sends shivers down your spine while keeping you in stitches, “Claw” is your go-to film. Just remember to keep a straight face when explaining to your friends why you’re laughing while watching someone get chased by a masked figure. But hey, in the world of horror, even the scariest movies can have a light-hearted twist – because why not? Embrace the terror, welcome the humor, and prepare yourself for a rollercoaster of emotions with "Claw." After all, if we can’t laugh at our fears, what’s the point? #ClawFilm #HorrorComedy #ESMA #CinematicHumor #HorrorMovies
    L’ESMA détourne les clichés des films d’horreurs : tremblez !
    Découvrez Claw, un court de fin d’études de l’ESMA qui s’inspire des codes des films d’horreur pour en proposer une version revisitée. A partir d’un concept qui rappelle Last Action Hero, l’équipe a concocté un fil
    Like
    Love
    Wow
    Sad
    Angry
    636
    1 Comentários 0 Compartilhamentos 0 Anterior
  • This Week's Tips For Helldivers 2, Monster Hunter Wilds, Oblivion Remastered, And More

    Start SlideshowStart SlideshowImage: The Pokémon Company, Arrowhead Game Studios, Blizzard, The Pokémon Company, Screenshot: Capcom / Samuel Moreno / Kotaku, Bethesda / Brandon Morgan / Kotaku, Nintendo, Bethesda / Brandon Morgan / Kotaku, Capcom / Samuel Moreno / KotakuYou know what we all need sometimes? A little advice. How do I plan for a future that’s so uncertain? Will AI take my job? If I go back to school and use AI to cheat, will I graduate and work for an AI boss? We can’t help you with any of that. But what we can do is provide some tips for Helldivers 2, Monster Hunter Wilds, Oblivion Remastered, and other great games. So, read on for that stuff, and maybe ask ChatGPT about those other things.Previous SlideNext SlideList slidesDon’t Rely On Ex Pokémon In Pokémon TCG Pocket AnymoreImage: The Pokémon CompanyDuring the initial months of Pokémon TCG Pocket, ex monsters dominated the competitive landscape. These monsters arestronger than their non-ex counterparts, and they can come with game-changing abilities that determine how your entire deck plays. In the past, players could create frustratingly fearsome decks consisting of two ex Pokémon supported by trainer and item cards. However, unless you pair together very specific ex Pokémon, you’ll now find yourself losing nearly every game you play. - Timothy Monbleau Read MorePrevious SlideNext SlideList slidesPlease, For The Love Of God, Defeat All Illuminate Stingrays In Helldivers 2Image: Arrowhead Game StudiosYou know what? Screw the Illuminate. I played round after round trying to get the Stingrays, also known as an Interloper, to spawn at least once, and those damn Overseers and Harvesters kept walking up and rocking me. In the end, I was victorious. A Stingray approached the airspace with reckless abandon, swooping in with practiced ease as it unloaded a barrage of molten death beams upon my head, and you know what happened? I died. A few times. But eventually, I managed to pop a shot off and I quickly discovered how to defeat Illuminate Stingrays in Helldivers 2. - Brandon Morgan Read MorePrevious SlideNext SlideList slidesDefeating Monster Hunter Wilds’ Demi Elder Dragon Might Be The Game’s Hardest Challenge So FarScreenshot: Capcom / Samuel Moreno / KotakuAlthough Zoh Shia is the thematic boss of Monster Hunter Wilds, other beasts can put up a tougher fight. Gore Magalaare easily in contention for being the most deadly enemies in the game. Not much is more threatening than their high mobility, powerful attacks, and unique Frenzy ailment that forms the basis for your Corrupted Mantle. - Samuel Moreno Read MorePrevious SlideNext SlideList slidesDon’t Forget To Play ‘The Shivering Isles’ Expansion In Oblivion RemasteredScreenshot: Bethesda / Brandon Morgan / KotakuWhether you’ve played the original Oblivion or not, chances are you’ve heard tales of the oddities awaiting you in the Shivering Isles. This expansion—the largest one for the open-world RPG—features a land of madness under the unyielding control of Sheogorath. It’s a beautiful world, yet so immensely wrong. But that’s why this DLC is one of the best in the franchise, so no matter how many hours you may have already put into the main story and the main world, you don’t want to miss this expansion. - Brandon Morgan Read MorePrevious SlideNext SlideList slidesHow Long Of A Ride Is Mario Kart World?Screenshot: NintendoThe Mario Kart franchise has been entertaining us all for decades—even with sibling fights and fits of rage over losing a race from a blue shell at the last second—but Mario Kart World is the first game to go open world. There hasn’t been a truly new entry in the series since 2014's Mario Kart 8, so being stoked to dive into this exciting adventure is perfectly reasonable. Equally reasonable, especially given the game’s controversial price tag, is to wonder how long it’ll take to beat and what type of replayability it offers. Let’s talk about it. - Billy Givens Read MorePrevious SlideNext SlideList slidesMario Kart World Players Are Exploiting Free Roam To Quickly Farm CoinsGif: Nintendo / FannaWuck / KotakuMario Kart World is full of cool stunts and lots of things to unlock, like new characters, costumes, and vehicles. The last of those requires accumulating a certain number of coins during your time with the Switch 2 exclusive, and while you could do that the normal way by just playing tons of races, you can also use the latest entry’s open world to farm coins faster or even while being completely AFK. - Ethan Gach Read MorePrevious SlideNext SlideList slidesOblivion Remastered’s Best Side Quest Is A World Within A WorldScreenshot: Bethesda / Brandon Morgan / KotakuIt’s been a long time since I kept a spreadsheet for a video game, or even notes beyond what I need for work. I had one for the original Oblivion run back in my school days. Back then, I knew where to find every side quest in the game. There were over 250. Still are, but now they’re enhanced, beautified for the modern gamer. One side quest retains its crown as the best, despite the game’s age. “A Brush With Death” is Oblivion Remastered’s best side quest by far, and here’s how to find and beat it! - Brandon Morgan Read MorePrevious SlideNext SlideList slidesDiablo IV: How To Power Level Your Way To Season 8's EndgameImage: BlizzardWhether you’re running a new build, trying out a new class, or returning to Diablo IV after an extended break,Whatever the case, learning how to level up fast in Diablo IV should help you check out everything new this season, along with hitting endgame so that your friends don’t cruelly make fun of you! - Brandon Morgan Read MorePrevious SlideNext SlideList slidesThe 5 Strongest Non-Ex Pokémon To Use In Pokémon TCG PocketImage: The Pokémon CompanyIt’s official: ex Pokémon no longer rule unchallenged Pokémon TCG Pocket. While these powerful cards are still prevalent in the competitive landscape, the rise of ex-specific counters have made many of these monsters risky to bring. It’s never been more vital to find strong Pokémon that are unburdened by the ex label, but who should you use? - Timothy Monbleau Read MorePrevious SlideNext SlideList slidesSome Of The Coolest Monster Hunter Wilds Armor Can Be Yours If You Collect Enough CoinsScreenshot: Capcom / Samuel Moreno / KotakuIt goes without saying that Monster Hunter Wilds has a lot of equipment materials to keep track of. The Title 1 Update increased the amount with the likes of Mizutsune parts and the somewhat obscurely named Pinnacle Coins. While it’s easy to know what the monster parts can be used for, the same can’t be said for a coin. Making things more complicated is that the related equipment isn’t unlocked all at once. - Samuel Moreno Read More
    #this #week039s #tips #helldivers #monster
    This Week's Tips For Helldivers 2, Monster Hunter Wilds, Oblivion Remastered, And More
    Start SlideshowStart SlideshowImage: The Pokémon Company, Arrowhead Game Studios, Blizzard, The Pokémon Company, Screenshot: Capcom / Samuel Moreno / Kotaku, Bethesda / Brandon Morgan / Kotaku, Nintendo, Bethesda / Brandon Morgan / Kotaku, Capcom / Samuel Moreno / KotakuYou know what we all need sometimes? A little advice. How do I plan for a future that’s so uncertain? Will AI take my job? If I go back to school and use AI to cheat, will I graduate and work for an AI boss? We can’t help you with any of that. But what we can do is provide some tips for Helldivers 2, Monster Hunter Wilds, Oblivion Remastered, and other great games. So, read on for that stuff, and maybe ask ChatGPT about those other things.Previous SlideNext SlideList slidesDon’t Rely On Ex Pokémon In Pokémon TCG Pocket AnymoreImage: The Pokémon CompanyDuring the initial months of Pokémon TCG Pocket, ex monsters dominated the competitive landscape. These monsters arestronger than their non-ex counterparts, and they can come with game-changing abilities that determine how your entire deck plays. In the past, players could create frustratingly fearsome decks consisting of two ex Pokémon supported by trainer and item cards. However, unless you pair together very specific ex Pokémon, you’ll now find yourself losing nearly every game you play. - Timothy Monbleau Read MorePrevious SlideNext SlideList slidesPlease, For The Love Of God, Defeat All Illuminate Stingrays In Helldivers 2Image: Arrowhead Game StudiosYou know what? Screw the Illuminate. I played round after round trying to get the Stingrays, also known as an Interloper, to spawn at least once, and those damn Overseers and Harvesters kept walking up and rocking me. In the end, I was victorious. A Stingray approached the airspace with reckless abandon, swooping in with practiced ease as it unloaded a barrage of molten death beams upon my head, and you know what happened? I died. A few times. But eventually, I managed to pop a shot off and I quickly discovered how to defeat Illuminate Stingrays in Helldivers 2. - Brandon Morgan Read MorePrevious SlideNext SlideList slidesDefeating Monster Hunter Wilds’ Demi Elder Dragon Might Be The Game’s Hardest Challenge So FarScreenshot: Capcom / Samuel Moreno / KotakuAlthough Zoh Shia is the thematic boss of Monster Hunter Wilds, other beasts can put up a tougher fight. Gore Magalaare easily in contention for being the most deadly enemies in the game. Not much is more threatening than their high mobility, powerful attacks, and unique Frenzy ailment that forms the basis for your Corrupted Mantle. - Samuel Moreno Read MorePrevious SlideNext SlideList slidesDon’t Forget To Play ‘The Shivering Isles’ Expansion In Oblivion RemasteredScreenshot: Bethesda / Brandon Morgan / KotakuWhether you’ve played the original Oblivion or not, chances are you’ve heard tales of the oddities awaiting you in the Shivering Isles. This expansion—the largest one for the open-world RPG—features a land of madness under the unyielding control of Sheogorath. It’s a beautiful world, yet so immensely wrong. But that’s why this DLC is one of the best in the franchise, so no matter how many hours you may have already put into the main story and the main world, you don’t want to miss this expansion. - Brandon Morgan Read MorePrevious SlideNext SlideList slidesHow Long Of A Ride Is Mario Kart World?Screenshot: NintendoThe Mario Kart franchise has been entertaining us all for decades—even with sibling fights and fits of rage over losing a race from a blue shell at the last second—but Mario Kart World is the first game to go open world. There hasn’t been a truly new entry in the series since 2014's Mario Kart 8, so being stoked to dive into this exciting adventure is perfectly reasonable. Equally reasonable, especially given the game’s controversial price tag, is to wonder how long it’ll take to beat and what type of replayability it offers. Let’s talk about it. - Billy Givens Read MorePrevious SlideNext SlideList slidesMario Kart World Players Are Exploiting Free Roam To Quickly Farm CoinsGif: Nintendo / FannaWuck / KotakuMario Kart World is full of cool stunts and lots of things to unlock, like new characters, costumes, and vehicles. The last of those requires accumulating a certain number of coins during your time with the Switch 2 exclusive, and while you could do that the normal way by just playing tons of races, you can also use the latest entry’s open world to farm coins faster or even while being completely AFK. - Ethan Gach Read MorePrevious SlideNext SlideList slidesOblivion Remastered’s Best Side Quest Is A World Within A WorldScreenshot: Bethesda / Brandon Morgan / KotakuIt’s been a long time since I kept a spreadsheet for a video game, or even notes beyond what I need for work. I had one for the original Oblivion run back in my school days. Back then, I knew where to find every side quest in the game. There were over 250. Still are, but now they’re enhanced, beautified for the modern gamer. One side quest retains its crown as the best, despite the game’s age. “A Brush With Death” is Oblivion Remastered’s best side quest by far, and here’s how to find and beat it! - Brandon Morgan Read MorePrevious SlideNext SlideList slidesDiablo IV: How To Power Level Your Way To Season 8's EndgameImage: BlizzardWhether you’re running a new build, trying out a new class, or returning to Diablo IV after an extended break,Whatever the case, learning how to level up fast in Diablo IV should help you check out everything new this season, along with hitting endgame so that your friends don’t cruelly make fun of you! - Brandon Morgan Read MorePrevious SlideNext SlideList slidesThe 5 Strongest Non-Ex Pokémon To Use In Pokémon TCG PocketImage: The Pokémon CompanyIt’s official: ex Pokémon no longer rule unchallenged Pokémon TCG Pocket. While these powerful cards are still prevalent in the competitive landscape, the rise of ex-specific counters have made many of these monsters risky to bring. It’s never been more vital to find strong Pokémon that are unburdened by the ex label, but who should you use? - Timothy Monbleau Read MorePrevious SlideNext SlideList slidesSome Of The Coolest Monster Hunter Wilds Armor Can Be Yours If You Collect Enough CoinsScreenshot: Capcom / Samuel Moreno / KotakuIt goes without saying that Monster Hunter Wilds has a lot of equipment materials to keep track of. The Title 1 Update increased the amount with the likes of Mizutsune parts and the somewhat obscurely named Pinnacle Coins. While it’s easy to know what the monster parts can be used for, the same can’t be said for a coin. Making things more complicated is that the related equipment isn’t unlocked all at once. - Samuel Moreno Read More #this #week039s #tips #helldivers #monster
    KOTAKU.COM
    This Week's Tips For Helldivers 2, Monster Hunter Wilds, Oblivion Remastered, And More
    Start SlideshowStart SlideshowImage: The Pokémon Company, Arrowhead Game Studios, Blizzard, The Pokémon Company, Screenshot: Capcom / Samuel Moreno / Kotaku, Bethesda / Brandon Morgan / Kotaku, Nintendo, Bethesda / Brandon Morgan / Kotaku, Capcom / Samuel Moreno / KotakuYou know what we all need sometimes? A little advice. How do I plan for a future that’s so uncertain? Will AI take my job? If I go back to school and use AI to cheat, will I graduate and work for an AI boss? We can’t help you with any of that. But what we can do is provide some tips for Helldivers 2, Monster Hunter Wilds, Oblivion Remastered, and other great games. So, read on for that stuff, and maybe ask ChatGPT about those other things.Previous SlideNext SlideList slidesDon’t Rely On Ex Pokémon In Pokémon TCG Pocket AnymoreImage: The Pokémon CompanyDuring the initial months of Pokémon TCG Pocket, ex monsters dominated the competitive landscape. These monsters are (usually) stronger than their non-ex counterparts, and they can come with game-changing abilities that determine how your entire deck plays. In the past, players could create frustratingly fearsome decks consisting of two ex Pokémon supported by trainer and item cards. However, unless you pair together very specific ex Pokémon, you’ll now find yourself losing nearly every game you play. - Timothy Monbleau Read MorePrevious SlideNext SlideList slidesPlease, For The Love Of God, Defeat All Illuminate Stingrays In Helldivers 2Image: Arrowhead Game StudiosYou know what? Screw the Illuminate. I played round after round trying to get the Stingrays, also known as an Interloper, to spawn at least once, and those damn Overseers and Harvesters kept walking up and rocking me. In the end, I was victorious. A Stingray approached the airspace with reckless abandon, swooping in with practiced ease as it unloaded a barrage of molten death beams upon my head, and you know what happened? I died. A few times. But eventually, I managed to pop a shot off and I quickly discovered how to defeat Illuminate Stingrays in Helldivers 2. - Brandon Morgan Read MorePrevious SlideNext SlideList slidesDefeating Monster Hunter Wilds’ Demi Elder Dragon Might Be The Game’s Hardest Challenge So FarScreenshot: Capcom / Samuel Moreno / KotakuAlthough Zoh Shia is the thematic boss of Monster Hunter Wilds, other beasts can put up a tougher fight. Gore Magala (and especially its Tempered version) are easily in contention for being the most deadly enemies in the game. Not much is more threatening than their high mobility, powerful attacks, and unique Frenzy ailment that forms the basis for your Corrupted Mantle. - Samuel Moreno Read MorePrevious SlideNext SlideList slidesDon’t Forget To Play ‘The Shivering Isles’ Expansion In Oblivion RemasteredScreenshot: Bethesda / Brandon Morgan / KotakuWhether you’ve played the original Oblivion or not, chances are you’ve heard tales of the oddities awaiting you in the Shivering Isles. This expansion—the largest one for the open-world RPG—features a land of madness under the unyielding control of Sheogorath. It’s a beautiful world, yet so immensely wrong. But that’s why this DLC is one of the best in the franchise, so no matter how many hours you may have already put into the main story and the main world, you don’t want to miss this expansion. - Brandon Morgan Read MorePrevious SlideNext SlideList slidesHow Long Of A Ride Is Mario Kart World?Screenshot: NintendoThe Mario Kart franchise has been entertaining us all for decades—even with sibling fights and fits of rage over losing a race from a blue shell at the last second—but Mario Kart World is the first game to go open world. There hasn’t been a truly new entry in the series since 2014's Mario Kart 8, so being stoked to dive into this exciting adventure is perfectly reasonable. Equally reasonable, especially given the game’s controversial price tag, is to wonder how long it’ll take to beat and what type of replayability it offers. Let’s talk about it. - Billy Givens Read MorePrevious SlideNext SlideList slidesMario Kart World Players Are Exploiting Free Roam To Quickly Farm CoinsGif: Nintendo / FannaWuck / KotakuMario Kart World is full of cool stunts and lots of things to unlock, like new characters, costumes, and vehicles. The last of those requires accumulating a certain number of coins during your time with the Switch 2 exclusive, and while you could do that the normal way by just playing tons of races, you can also use the latest entry’s open world to farm coins faster or even while being completely AFK. - Ethan Gach Read MorePrevious SlideNext SlideList slidesOblivion Remastered’s Best Side Quest Is A World Within A WorldScreenshot: Bethesda / Brandon Morgan / KotakuIt’s been a long time since I kept a spreadsheet for a video game, or even notes beyond what I need for work. I had one for the original Oblivion run back in my school days. Back then, I knew where to find every side quest in the game. There were over 250. Still are, but now they’re enhanced, beautified for the modern gamer. One side quest retains its crown as the best, despite the game’s age. “A Brush With Death” is Oblivion Remastered’s best side quest by far, and here’s how to find and beat it! - Brandon Morgan Read MorePrevious SlideNext SlideList slidesDiablo IV: How To Power Level Your Way To Season 8's EndgameImage: BlizzardWhether you’re running a new build, trying out a new class, or returning to Diablo IV after an extended break, (a break in which you were likely playing Path of Exile 2, right? I know I wasn’t alone in farming Exalted Orbs!) Whatever the case, learning how to level up fast in Diablo IV should help you check out everything new this season, along with hitting endgame so that your friends don’t cruelly make fun of you! - Brandon Morgan Read MorePrevious SlideNext SlideList slidesThe 5 Strongest Non-Ex Pokémon To Use In Pokémon TCG PocketImage: The Pokémon CompanyIt’s official: ex Pokémon no longer rule unchallenged Pokémon TCG Pocket. While these powerful cards are still prevalent in the competitive landscape, the rise of ex-specific counters have made many of these monsters risky to bring. It’s never been more vital to find strong Pokémon that are unburdened by the ex label, but who should you use? - Timothy Monbleau Read MorePrevious SlideNext SlideList slidesSome Of The Coolest Monster Hunter Wilds Armor Can Be Yours If You Collect Enough CoinsScreenshot: Capcom / Samuel Moreno / KotakuIt goes without saying that Monster Hunter Wilds has a lot of equipment materials to keep track of. The Title 1 Update increased the amount with the likes of Mizutsune parts and the somewhat obscurely named Pinnacle Coins. While it’s easy to know what the monster parts can be used for, the same can’t be said for a coin. Making things more complicated is that the related equipment isn’t unlocked all at once. - Samuel Moreno Read More
    Like
    Love
    Wow
    Sad
    Angry
    391
    0 Comentários 0 Compartilhamentos 0 Anterior
  • New Court Order in Stratasys v. Bambu Lab Lawsuit

    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit. 
    Both parties have agreed to consolidate the lead and member casesinto a single case under Case No. 2:25-cv-00465-JRG. 
    Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299.
    On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward.
    Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299. The court will now decide whether to merge the cases.
    This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits. 
    The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year. 
    Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.       
    A Stratasys Fortus 450mcand a Bambu Lab X1C. Image by 3D Printing industry.
    Another twist in the Stratasys v. Bambu Lab lawsuit 
    Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities.
    Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers.
    Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice.
    It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab. 
    Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022.
    In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas.
    Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.   
    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.  
    In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party.
    Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12. Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment.
    The Bambu Lab X1C 3D printer. Image via Bambu Lab.
    3D printing patent battles 
    The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit. 
    The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent.
    Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creaility “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.  
    The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs. 
    In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.  
    San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer.
    3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry Youtube channel to access more exclusive content.Featured image shows a Stratasys Fortus 450mcand a Bambu Lab X1C. Image by 3D Printing industry.
    #new #court #order #stratasys #bambu
    New Court Order in Stratasys v. Bambu Lab Lawsuit
    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit.  Both parties have agreed to consolidate the lead and member casesinto a single case under Case No. 2:25-cv-00465-JRG.  Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299. On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward. Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299. The court will now decide whether to merge the cases. This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits.  The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year.  Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.        A Stratasys Fortus 450mcand a Bambu Lab X1C. Image by 3D Printing industry. Another twist in the Stratasys v. Bambu Lab lawsuit  Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities. Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers. Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice. It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab.  Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022. In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas. Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.   In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party. Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12. Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment. The Bambu Lab X1C 3D printer. Image via Bambu Lab. 3D printing patent battles  The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit.  The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent. Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creaility “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.   The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs.  In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.   San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer. 3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets. Who won the 2024 3D Printing Industry Awards? Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry Youtube channel to access more exclusive content.Featured image shows a Stratasys Fortus 450mcand a Bambu Lab X1C. Image by 3D Printing industry. #new #court #order #stratasys #bambu
    3DPRINTINGINDUSTRY.COM
    New Court Order in Stratasys v. Bambu Lab Lawsuit
    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit.  Both parties have agreed to consolidate the lead and member cases (2:24-CV-00644-JRG and 2:24-CV-00645-JRG) into a single case under Case No. 2:25-cv-00465-JRG.  Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299(a). On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward. Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299(a). The court will now decide whether to merge the cases. This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits.  The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year.  Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.        A Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing industry. Another twist in the Stratasys v. Bambu Lab lawsuit  Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities. Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers. Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice. It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab.  Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022. In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas. Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.   In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party. Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12(b)(7). Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment. The Bambu Lab X1C 3D printer. Image via Bambu Lab. 3D printing patent battles  The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit.  The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent. Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creaility “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.   The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs.  In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.   San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer. 3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets. Who won the 2024 3D Printing Industry Awards? Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry Youtube channel to access more exclusive content.Featured image shows a Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing industry.
    Like
    Love
    Wow
    Sad
    Angry
    522
    2 Comentários 0 Compartilhamentos 0 Anterior
  • AU Deals: Today's Hottest AAA Discounts to Heat Up Your Game Cave Winter Hibernation

    Winter is well and truly biting, but this fresh crop of game deals is bringing the heat. From mythological mayhem to pocket-sized platformers, there’s something here for every taste and timeframe. If your digital shelf could use a mid-year injection of chaos, charm, or challenge, this week’s offerings are primed to please.This Day in Gaming In retro news, I’m lighting a 26‑candle cake for Silent Hill, the fog‑laden survival horror fest that kept '99-era me perched on a seat with barely 2% of the surface area of one butt cheek. I still remember tentatively sweeping my flashlight across those grainy, polygonal streets, only to have the beam half illuminate some scurrying something in the dark.
    Though the OG Resident Evil certainly vexed me first, the unique magic of Silent Hill lay in how its graphical limitations—thick fog and encroaching darkness—became tools of terror rather than platform limitations. Every ring of static from your radio or *that* air raid siren heralding the "other plane" of this madhouse could ratchet up the dread in an instant. Lastly, I recall working game retail at launch and having to help absolutely bloody everybody with a solution to the piano puzzle.Tank controls andbugger all visibility. OG Silent Hill was terrifying.Aussie bdays for notable games- Silent Hill1999. Redux- Marvel vs. Capcom 22000. Redux- The Conduit2009. eBay- Monster Hunter Generations2016. eBayContentsNice Savings for Nintendo SwitchAvailable now!Nintendo Switch 2 ConsoleNintendo Switch 2 + Mario Kart WorldNintendo kicks things off with Persona 5 Royal for Aa lavishly expanded edition of the genre-defining RPG whose original director Katsura Hashino was inspired by Carl Jung’s theories of the psyche. Also worth nabbing is Bravely Default II at Aa spiritual twinner to the Final Fantasy titles that’s cheekily packed with nostalgic mechanics like turning off random encounters to power-level in peace.Persona 5 Royal- ABravely Default II- ASonic Frontiers- ASonic x Shadow Generations- ANBA 2K25- AMetal Gear Col.- AExpiring Recent DealsOr gift a Nintendo eShop Card.Switch Console PricesHow much to Switch it up?Switch OLED + Mario Wonder: $̶5̶3̶9̶ |
    Switch Original: $̶4̶9̶9̶ |
    Switch OLED Black: $̶5̶3̶9̶ |
    Switch OLED White: $̶5̶3̶9̶ ♥ |
    Switch Lite: $̶3̶2̶9̶ |
    Switch Lite Hyrule: $̶3̶3̶9̶ See itBack to topExciting Bargains for Xbox Over on Xbox Series X, Warhammer 40,000: Space Marine 2 is slashing skulls and prices at Afinally giving fans the long-awaited sequel to one of gaming’s most satisfyingly weighty shooters. Suicide Squad: Kill the Justice League is an outrageous Aand despite its rocky reception, it’s a fascinating look at how Batman: Arkham devs tried to blend looter-shooter DNA into their universe.40K Space Marine 2- ASuicide Squad: KTJL- AWild Hearts- AAvatar: Pandora Gold Ed.- AHogwarts Legacy- AXbox OneTopSpin 2K25- ASunset Overdrive- AAlan Wake Rem.- AExpiring Recent DealsThe Witcher 3 Comp.- ATekken 8- ANBA 2K25- AFarming Simulator 25- AFC 25- ARed Dead Redemption 2- ALies of P- ALego Jurassic World- AOr just invest in an Xbox Card.Xbox Console PricesHow many bucks for a 'Box? Series X: $̶7̶9̶9̶ |
    Series S Black: $̶5̶4̶9̶ |
    Series S White:$̶4̶9̶9̶ |
    Series S Starter: N/ASee itBack to topPure Scores for PlayStationFor PS5 players, Marvel’s Spider-Man: Miles Morales swings down to Aletting you sling through Harlem while wearing everything from a Bodega Cat suit to a Spider-Verse frame-rate filter. Meanwhile, Ratchet & Clank: Rift Apart for Ais a tech marvel that started life as a PS4 title, before being fully rebuilt to show off the PS5’s SSD.PS4God of War Ragnarök- AGran Turismo 7- AWatch Dogs: Legion- AExpiring Recent DealsPS+ Monthly FreebiesYours to keep from May 1 with this subscriptionArk: Survival AscendedBalatroWarhammer 40,000: BoltgunOr purchase a PS Store Card.What you'll pay to 'Station.PS5 + Astro Bot:$̶7̶2̶4̶.9̶5̶ |
    PS5 Slim Disc:$̶7̶9̶9̶ |
    PS5 Slim Digital:6̶7̶9̶ |
    PS5 Pro $̶1̶,1̶9̶9̶ |
    PS VR2: |
    PS VR2 + Horizon: |
    PS Portal: See itBack to topPurchase Cheap for PCOn PC, Resident Evil 4 is a steal at Aa stunning remake where the developers added extra charm to Leon’s famous “Where’s everyone going, bingo?” line by letting players unlock vintage filters that emulate 2005-era graphics. Also notable is Lies of P at Athe Pinocchio-meets-Bloodborne mash-up that lets you lie in dialogue choices for combat perks.Lies of P- AThe Alters- AClair Obscur: Expedition 33- ASilent Hill 2- AForza Horizon 5- AResident Evil 4- AExpiring Recent DealsOr just get a Steam Wallet CardPC Hardware PricesSlay your pile of shame.Official launch in NovSteam Deck 256GB LCD: |
    Steam Deck 512GB OLED: |
    Steam Deck 1TB OLED: See it at SteamLaptop DealsDesktop DealsLenovo neo 50a G5 27" AIO– ALenovo neo 50q G4 Tiny– ALenovo neo 50t G5 Tower– ALegion Tower 5i G8– AMonitor DealsSamsung QE50T 50"– AARZOPA 16.1" 144Hz– AZ-Edge 27" 240Hz– AGawfolk 34" WQHD– ALG 27" Ultragear– AComponent DealsStorage DealsBack to topLegit LEGO DealsExpiring Recent DealsBack to topHot Headphones DealsAudiophilia for lessBose QuietComfort Ultra Wireless– ASoundcore by Anker Q20i– ASony MDR7506 Professional– ATechnics Premium– ABose SoundLink Flex– AJBL Charge 5 - Portable Speaker– AJBL Flip Essential 2 Waterproof Speaker– ASony SRS-XB100 Travel Speaker– AUltimate Ears Boom 3 Portable Speaker– ASamsung Galaxy Buds2 Pro– ASennheiser Momentum 4 Wireless– ABack to topTerrific TV DealsDo right by your console, upgrade your tellyLG 43" UT80 4K– AKogan 65" QLED 4K– AKogan 55" QLED 4K– ALG 55" UT80 4K– APrism+ Q75 Ultra 75" 4K QLED– AGaimoo Mini Projector 1080p w/ 4K– AGooDee 4K Projector– AVOPLLS Mini Projector 4K– AXuanPad Mini Projector– ALG S70TY Q Series Sound Barn*-22%) – ASony HTG700 Atmos Soundbar– AYamaha NS-SW050 Subwoofer– ASmart Home DealsBack to top Adam Mathew is our Aussie deals wrangler. He plays practically everything, often on YouTube.
    #deals #today039s #hottest #aaa #discounts
    AU Deals: Today's Hottest AAA Discounts to Heat Up Your Game Cave Winter Hibernation
    Winter is well and truly biting, but this fresh crop of game deals is bringing the heat. From mythological mayhem to pocket-sized platformers, there’s something here for every taste and timeframe. If your digital shelf could use a mid-year injection of chaos, charm, or challenge, this week’s offerings are primed to please.This Day in Gaming 🎂In retro news, I’m lighting a 26‑candle cake for Silent Hill, the fog‑laden survival horror fest that kept '99-era me perched on a seat with barely 2% of the surface area of one butt cheek. I still remember tentatively sweeping my flashlight across those grainy, polygonal streets, only to have the beam half illuminate some scurrying something in the dark. Though the OG Resident Evil certainly vexed me first, the unique magic of Silent Hill lay in how its graphical limitations—thick fog and encroaching darkness—became tools of terror rather than platform limitations. Every ring of static from your radio or *that* air raid siren heralding the "other plane" of this madhouse could ratchet up the dread in an instant. Lastly, I recall working game retail at launch and having to help absolutely bloody everybody with a solution to the piano puzzle.Tank controls andbugger all visibility. OG Silent Hill was terrifying.Aussie bdays for notable games- Silent Hill1999. Redux- Marvel vs. Capcom 22000. Redux- The Conduit2009. eBay- Monster Hunter Generations2016. eBayContentsNice Savings for Nintendo SwitchAvailable now!Nintendo Switch 2 ConsoleNintendo Switch 2 + Mario Kart WorldNintendo kicks things off with Persona 5 Royal for Aa lavishly expanded edition of the genre-defining RPG whose original director Katsura Hashino was inspired by Carl Jung’s theories of the psyche. Also worth nabbing is Bravely Default II at Aa spiritual twinner to the Final Fantasy titles that’s cheekily packed with nostalgic mechanics like turning off random encounters to power-level in peace.Persona 5 Royal- ABravely Default II- ASonic Frontiers- ASonic x Shadow Generations- ANBA 2K25- AMetal Gear Col.- AExpiring Recent DealsOr gift a Nintendo eShop Card.Switch Console PricesHow much to Switch it up?Switch OLED + Mario Wonder: $̶5̶3̶9̶ | Switch Original: $̶4̶9̶9̶ | Switch OLED Black: $̶5̶3̶9̶ | Switch OLED White: $̶5̶3̶9̶ ♥ | Switch Lite: $̶3̶2̶9̶ | Switch Lite Hyrule: $̶3̶3̶9̶ See itBack to topExciting Bargains for Xbox Over on Xbox Series X, Warhammer 40,000: Space Marine 2 is slashing skulls and prices at Afinally giving fans the long-awaited sequel to one of gaming’s most satisfyingly weighty shooters. Suicide Squad: Kill the Justice League is an outrageous Aand despite its rocky reception, it’s a fascinating look at how Batman: Arkham devs tried to blend looter-shooter DNA into their universe.40K Space Marine 2- ASuicide Squad: KTJL- AWild Hearts- AAvatar: Pandora Gold Ed.- AHogwarts Legacy- AXbox OneTopSpin 2K25- ASunset Overdrive- AAlan Wake Rem.- AExpiring Recent DealsThe Witcher 3 Comp.- ATekken 8- ANBA 2K25- AFarming Simulator 25- AFC 25- ARed Dead Redemption 2- ALies of P- ALego Jurassic World- AOr just invest in an Xbox Card.Xbox Console PricesHow many bucks for a 'Box? Series X: $̶7̶9̶9̶ 👑| Series S Black: $̶5̶4̶9̶ | Series S White:$̶4̶9̶9̶ | Series S Starter: N/ASee itBack to topPure Scores for PlayStationFor PS5 players, Marvel’s Spider-Man: Miles Morales swings down to Aletting you sling through Harlem while wearing everything from a Bodega Cat suit to a Spider-Verse frame-rate filter. Meanwhile, Ratchet & Clank: Rift Apart for Ais a tech marvel that started life as a PS4 title, before being fully rebuilt to show off the PS5’s SSD.PS4God of War Ragnarök- AGran Turismo 7- AWatch Dogs: Legion- AExpiring Recent DealsPS+ Monthly FreebiesYours to keep from May 1 with this subscriptionArk: Survival AscendedBalatroWarhammer 40,000: BoltgunOr purchase a PS Store Card.What you'll pay to 'Station.PS5 + Astro Bot:$̶7̶2̶4̶.9̶5̶ 👑 | PS5 Slim Disc:$̶7̶9̶9̶ | PS5 Slim Digital:6̶7̶9̶ | PS5 Pro $̶1̶,1̶9̶9̶ | PS VR2: | PS VR2 + Horizon: | PS Portal: See itBack to topPurchase Cheap for PCOn PC, Resident Evil 4 is a steal at Aa stunning remake where the developers added extra charm to Leon’s famous “Where’s everyone going, bingo?” line by letting players unlock vintage filters that emulate 2005-era graphics. Also notable is Lies of P at Athe Pinocchio-meets-Bloodborne mash-up that lets you lie in dialogue choices for combat perks.Lies of P- AThe Alters- AClair Obscur: Expedition 33- ASilent Hill 2- AForza Horizon 5- AResident Evil 4- AExpiring Recent DealsOr just get a Steam Wallet CardPC Hardware PricesSlay your pile of shame.Official launch in NovSteam Deck 256GB LCD: | Steam Deck 512GB OLED: | Steam Deck 1TB OLED: See it at SteamLaptop DealsDesktop DealsLenovo neo 50a G5 27" AIO– ALenovo neo 50q G4 Tiny– ALenovo neo 50t G5 Tower– ALegion Tower 5i G8– AMonitor DealsSamsung QE50T 50"– AARZOPA 16.1" 144Hz– AZ-Edge 27" 240Hz– AGawfolk 34" WQHD– ALG 27" Ultragear– AComponent DealsStorage DealsBack to topLegit LEGO DealsExpiring Recent DealsBack to topHot Headphones DealsAudiophilia for lessBose QuietComfort Ultra Wireless– ASoundcore by Anker Q20i– ASony MDR7506 Professional– ATechnics Premium– ABose SoundLink Flex– AJBL Charge 5 - Portable Speaker– AJBL Flip Essential 2 Waterproof Speaker– ASony SRS-XB100 Travel Speaker– AUltimate Ears Boom 3 Portable Speaker– ASamsung Galaxy Buds2 Pro– ASennheiser Momentum 4 Wireless– ABack to topTerrific TV DealsDo right by your console, upgrade your tellyLG 43" UT80 4K– AKogan 65" QLED 4K– AKogan 55" QLED 4K– ALG 55" UT80 4K– APrism+ Q75 Ultra 75" 4K QLED– AGaimoo Mini Projector 1080p w/ 4K– AGooDee 4K Projector– AVOPLLS Mini Projector 4K– AXuanPad Mini Projector– ALG S70TY Q Series Sound Barn*-22%) – ASony HTG700 Atmos Soundbar– AYamaha NS-SW050 Subwoofer– ASmart Home DealsBack to top Adam Mathew is our Aussie deals wrangler. He plays practically everything, often on YouTube. #deals #today039s #hottest #aaa #discounts
    WWW.IGN.COM
    AU Deals: Today's Hottest AAA Discounts to Heat Up Your Game Cave Winter Hibernation
    Winter is well and truly biting, but this fresh crop of game deals is bringing the heat. From mythological mayhem to pocket-sized platformers, there’s something here for every taste and timeframe. If your digital shelf could use a mid-year injection of chaos, charm, or challenge, this week’s offerings are primed to please.This Day in Gaming 🎂In retro news, I’m lighting a 26‑candle cake for Silent Hill, the fog‑laden survival horror fest that kept '99-era me perched on a seat with barely 2% of the surface area of one butt cheek. I still remember tentatively sweeping my flashlight across those grainy, polygonal streets, only to have the beam half illuminate some scurrying something in the dark. Though the OG Resident Evil certainly vexed me first, the unique magic of Silent Hill lay in how its graphical limitations—thick fog and encroaching darkness—became tools of terror rather than platform limitations. Every ring of static from your radio or *that* air raid siren heralding the "other plane" of this madhouse could ratchet up the dread in an instant. Lastly, I recall working game retail at launch and having to help absolutely bloody everybody with a solution to the piano puzzle.Tank controls and (hardware induced) bugger all visibility. OG Silent Hill was terrifying.Aussie bdays for notable games- Silent Hill (PS) 1999. Redux- Marvel vs. Capcom 2 (DC) 2000. Redux- The Conduit (Wii) 2009. eBay- Monster Hunter Generations (3DS) 2016. eBayContentsNice Savings for Nintendo SwitchAvailable now!Nintendo Switch 2 ConsoleNintendo Switch 2 + Mario Kart WorldNintendo kicks things off with Persona 5 Royal for A$66.60, a lavishly expanded edition of the genre-defining RPG whose original director Katsura Hashino was inspired by Carl Jung’s theories of the psyche. Also worth nabbing is Bravely Default II at A$63.10, a spiritual twinner to the Final Fantasy titles that’s cheekily packed with nostalgic mechanics like turning off random encounters to power-level in peace.Persona 5 Royal (-33%) - A$66.60Bravely Default II (-21%) - A$63.10Sonic Frontiers (-53%) - A$47Sonic x Shadow Generations (-35%) - A$49NBA 2K25 (-79%) - A$19Metal Gear Col. (-50%) - A$45Expiring Recent DealsOr gift a Nintendo eShop Card.Switch Console PricesHow much to Switch it up?Switch OLED + Mario Wonder: $̶5̶3̶9̶ $538 | Switch Original: $̶4̶9̶9̶ $448 | Switch OLED Black: $̶5̶3̶9̶ $469 | Switch OLED White: $̶5̶3̶9̶ $449 ♥ | Switch Lite: $̶3̶2̶9̶ $328 | Switch Lite Hyrule: $̶3̶3̶9̶ $335See itBack to topExciting Bargains for Xbox Over on Xbox Series X, Warhammer 40,000: Space Marine 2 is slashing skulls and prices at A$49.90, finally giving fans the long-awaited sequel to one of gaming’s most satisfyingly weighty shooters. Suicide Squad: Kill the Justice League is an outrageous A$9.90, and despite its rocky reception, it’s a fascinating look at how Batman: Arkham devs tried to blend looter-shooter DNA into their universe.40K Space Marine 2 (-54%) - A$49.90Suicide Squad: KTJL (-91%) - A$9.90Wild Hearts (-83%) - A$19Avatar: Pandora Gold Ed. (-69%) - A$49.90Hogwarts Legacy (-75%) - A$27.40Xbox OneTopSpin 2K25 (-88%) - A$14.90Sunset Overdrive (-36%) - A$19.20Alan Wake Rem. (-85%) - A$6.70Expiring Recent DealsThe Witcher 3 Comp. (-56%) - A$34.80Tekken 8 (-53%) - A$39.90NBA 2K25 (-80%) - A$24Farming Simulator 25 (-32%) - A$68FC 25 (-57%) - A$34Red Dead Redemption 2 (-78%) - A$20Lies of P (-19%) - A$73Lego Jurassic World (-65%) - A$22.50Or just invest in an Xbox Card.Xbox Console PricesHow many bucks for a 'Box? Series X: $̶7̶9̶9̶ $724 👑| Series S Black: $̶5̶4̶9̶ $545 | Series S White:$̶4̶9̶9̶ $498 | Series S Starter: N/ASee itBack to topPure Scores for PlayStationFor PS5 players, Marvel’s Spider-Man: Miles Morales swings down to A$39, letting you sling through Harlem while wearing everything from a Bodega Cat suit to a Spider-Verse frame-rate filter. Meanwhile, Ratchet & Clank: Rift Apart for A$54 is a tech marvel that started life as a PS4 title, before being fully rebuilt to show off the PS5’s SSD.PS4God of War Ragnarök (-60%) - A$44Gran Turismo 7 (-60%) - A$44Watch Dogs: Legion (-86%) - A$13.60Expiring Recent DealsPS+ Monthly FreebiesYours to keep from May 1 with this subscriptionArk: Survival Ascended (PS5)Balatro (PS5/PS4)Warhammer 40,000: Boltgun (PS5/PS4)Or purchase a PS Store Card.What you'll pay to 'Station.PS5 + Astro Bot:$̶7̶2̶4̶.9̶5̶ $699👑 | PS5 Slim Disc:$̶7̶9̶9̶ $625 | PS5 Slim Digital:6̶7̶9̶ $549 | PS5 Pro $̶1̶,1̶9̶9̶ $1,049 | PS VR2: $649.95 | PS VR2 + Horizon: $1,099 | PS Portal: $329See itBack to topPurchase Cheap for PCOn PC, Resident Evil 4 is a steal at A$29.90, a stunning remake where the developers added extra charm to Leon’s famous “Where’s everyone going, bingo?” line by letting players unlock vintage filters that emulate 2005-era graphics. Also notable is Lies of P at A$76.40, the Pinocchio-meets-Bloodborne mash-up that lets you lie in dialogue choices for combat perks.Lies of P (-15%) - A$76.40The Alters (-30%) - A$35.60Clair Obscur: Expedition 33 (-18%) - A$57.30Silent Hill 2 (-40%) - A$61.50Forza Horizon 5 (-65%) - A$31.40Resident Evil 4 (-50%) - A$29.90Expiring Recent DealsOr just get a Steam Wallet CardPC Hardware PricesSlay your pile of shame.Official launch in NovSteam Deck 256GB LCD: $649 | Steam Deck 512GB OLED: $899 | Steam Deck 1TB OLED: $1,049See it at SteamLaptop DealsDesktop DealsLenovo neo 50a G5 27" AIO (-47%) – A$1,379Lenovo neo 50q G4 Tiny (-35%) – A$639Lenovo neo 50t G5 Tower (-20%) – A$871.20Legion Tower 5i G8 (-29%) – A$1,899Monitor DealsSamsung QE50T 50" (-31%) – A$596ARZOPA 16.1" 144Hz (-55%) – A$159.99Z-Edge 27" 240Hz (-15%) – A$237.99Gawfolk 34" WQHD (-28%) – A$359LG 27" Ultragear (-42%) – A$349Component DealsStorage DealsBack to topLegit LEGO DealsExpiring Recent DealsBack to topHot Headphones DealsAudiophilia for lessBose QuietComfort Ultra Wireless (-38%) – A$399.95Soundcore by Anker Q20i (-43%) – A$68.79Sony MDR7506 Professional (-30%) – A$169Technics Premium (-46%) – A$299Bose SoundLink Flex (-31%) – A$171JBL Charge 5 - Portable Speaker (-28%) – A$144JBL Flip Essential 2 Waterproof Speaker (-26%) – A$96Sony SRS-XB100 Travel Speaker (-41%) – A$84.15Ultimate Ears Boom 3 Portable Speaker (-41%) – A$134.95Samsung Galaxy Buds2 Pro (-26%) – A$259.29Sennheiser Momentum 4 Wireless (-46%) – A$275Back to topTerrific TV DealsDo right by your console, upgrade your tellyLG 43" UT80 4K (-24%) – A$635Kogan 65" QLED 4K (-50%) – A$699Kogan 55" QLED 4K (-45%) – A$549LG 55" UT80 4K (-28%) – A$866Prism+ Q75 Ultra 75" 4K QLED (-47%) – A$1,229Gaimoo Mini Projector 1080p w/ 4K (-33%) – A$119.99GooDee 4K Projector (-58%) – A$169.99VOPLLS Mini Projector 4K (-19%) – A$168.99XuanPad Mini Projector (-36%) – A$128.99LG S70TY Q Series Sound Barn*-22%) – A$546Sony HTG700 Atmos Soundbar (-15%) – A$594Yamaha NS-SW050 Subwoofer (-13%) – A$270Smart Home DealsBack to top Adam Mathew is our Aussie deals wrangler. He plays practically everything, often on YouTube.
    Like
    Love
    Wow
    Sad
    Angry
    564
    0 Comentários 0 Compartilhamentos 0 Anterior
  • 15 riveting images from the 2025 UN World Oceans Day Photo Competition

    Big and Small Underwater Faces — 3rd Place.
    Trips to the Antarctic Peninsula always yield amazing encounters with leopard seals. Boldly approaching me and baring his teeth, this individual was keen to point out that this part of Antarctica was his territory. This picture was shot at dusk, resulting in the rather moody atmosphere.
     
    Credit: Lars von Ritter Zahony/ World Ocean’s Day

    Get the Popular Science daily newsletter
    Breakthroughs, discoveries, and DIY tips sent every weekday.

    The striking eye of a humpback whale named Sweet Girl peers at the camera. Just four days later, she would be dead, hit by a speeding boat and one of the 20,000 whales killed by ship strikes each year. Photographer Rachel Moore’s captivating imageof Sweet Girl earned top honors at the 2025 United Nations World Oceans Day Photo Competition.
    Wonder: Sustaining What Sustains Us — WinnerThis photo, taken in Mo’orea, French Polynesia in 2024, captures the eye of a humpback whale named Sweet Girl, just days before her tragic death. Four days after I captured this intimate moment, she was struck and killed by a fast-moving ship. Her death serves as a heartbreaking reminder of the 20,000 whales lost to ship strikes every year. We are using her story to advocate for stronger protections, petitioning for stricter speed laws around Tahiti and Mo’orea during whale season. I hope Sweet Girl’s legacy will spark real change to protect these incredible animals and prevent further senseless loss.Credit: Rachel Moore/ United Nations World Oceans Day www.unworldoceansday.org
    Now in its twelfth year, the competition coordinated in collaboration between the UN Division for Ocean Affairs and the Law of the Sea, DivePhotoGuide, Oceanic Global, and  the Intergovernmental Oceanographic Commission of UNESCO. Each year, thousands of underwater photographers submit images that judges award prizes for across four categories: Big and Small Underwater Faces, Underwater Seascapes, Above Water Seascapes, and Wonder: Sustaining What Sustains Us.
    This year’s winning images include a curious leopard seal, a swarm of jellyfish, and a very grumpy looking Japanese warbonnet. Given our oceans’ perilous state, all competition participants were required to sign a charter of 14 commitments regarding ethics in photography.
    Underwater Seascapes — Honorable MentionWith only orcas as their natural predators, leopard seals are Antarctica’s most versatile hunters, preying on everything from fish and cephalopods to penguins and other seals. Gentoo penguins are a favored menu item, and leopard seals can be observed patrolling the waters around their colonies. For this shot, I used a split image to capture both worlds: the gentoo penguin colony in the background with the leopard seal on the hunt in the foreground.Credit: Lars von Ritter Zahony/ United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes – WinnerA serene lake cradled by arid dunes, where a gentle stream breathes life into the heart of Mother Earth’s creation: Captured from an airplane, this image reveals the powerful contrasts and hidden beauty where land and ocean meet, reminding us that the ocean is the source of all life and that everything in nature is deeply connected. The location is a remote stretch of coastline near Shark Bay, Western Australia.Credit: Leander Nardin/ United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes — 3rd PlaceParadise Harbour is one of the most beautiful places on the Antarctic Peninsula. When I visited, the sea was extremely calm, and I was lucky enough to witness a wonderfully clear reflection of the Suárez Glacierin the water. The only problem was the waves created by our speedboat, and the only way to capture the perfect reflection was to lie on the bottom of the boat while it moved towards the glacier.Credit: Andrey Nosik/ United Nations World Oceans Day www.unworldoceansday.org
    Underwater Seascapes — 3rd Place“La Rapadura” is a natural hidden treasure on the northern coast of Tenerife, in the Spanish territory of the Canary Islands. Only discovered in 1996, it is one of the most astonishing underwater landscapes in the world, consistently ranking among the planet’s best dive sites. These towering columns of basalt are the result of volcanic processes that occurred between 500,000 and a million years ago. The formation was created when a basaltic lava flow reached the ocean, where, upon cooling and solidifying, it contracted, creating natural structures often compared to the pipes of church organs. Located in a region where marine life has been impacted by once common illegal fishing practices, this stunning natural monument has both geological and ecological value, and scientists and underwater photographers are advocating for its protection.Credit: Pedro Carrillo/ United Nations World Oceans Day www.unworldoceansday.org
    Underwater Seascapes — WinnerThis year, I had the incredible opportunity to visit a jellyfish lake during a liveaboard trip around southern Raja Ampat, Indonesia. Being surrounded by millions of jellyfish, which have evolved to lose their stinging ability due to the absence of predators, was one of the most breathtaking experiences I’ve ever had.Credit: Dani Escayola/ United Nations World Oceans Day www.unworldoceansday.org
    Underwater Seascapes — 2nd PlaceThis shot captures a school of rays resting at a cleaning station in Mauritius, where strong currents once attracted them regularly. Some rays grew accustomed to divers, allowing close encounters like this. Sadly, after the severe bleaching that the reefs here suffered last year, such gatherings have become rare, and I fear I may not witness this again at the same spot.Credit: Gerald Rambert/ United Nations World Oceans Day www.unworldoceansday.org
    Wonder: Sustaining What Sustains Us — 3rd PlaceShot in Cuba’s Jardines de la Reina—a protected shark sanctuary—this image captures a Caribbean reef shark weaving through a group of silky sharks near the surface. Using a slow shutter and strobes as the shark pivoted sharply, the motion blurred into a wave-like arc across its head, lit by the golden hues of sunset. The abundance and behavior of sharks here is a living symbol of what protected oceans can look like.Credit: Steven Lopez/ United Nations World Oceans Day www.unworldoceansday.org
     Above Water Seascapes — 2nd PlaceNorthern gannetssoar above the dramatic cliffs of Scotland’s Hermaness National Nature Reserve, their sleek white bodies and black-tipped wings slicing through the Shetland winds. These seabirds, the largest in the North Atlantic, are renowned for their striking plunge-dives, reaching speeds up to 100 kphas they hunt for fish beneath the waves. The cliffs of Hermaness provide ideal nesting sites, with updrafts aiding their take-offs and landings. Each spring, thousands return to this rugged coastline, forming one of the UK’s most significant gannet colonies. It was a major challenge to take photos at the edge of these cliffs at almost 200 meterswith the winds up to 30 kph.Credit: Nur Tucker/ United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes — Honorable MentionA South Atlantic swell breaks on the Dungeons Reef off the Cape Peninsula, South Africa, shot while photographing a big-wave surf session in October 2017. It’s the crescendoing sounds of these breaking swells that always amazes me.Credit: Ken Findlay/ United Nations World Oceans Day www.unworldoceansday.org
    Wonder: Sustaining What Sustains Us — Honorable MentionHumpback whales in their thousands migrate along the Ningaloo Reef in Western Australia every year on the way to and from their calving grounds. In four seasons of swimming with them on the reef here, this is the only encounter I’ve had like this one. This pair of huge adult whales repeatedly spy-hopped alongside us, seeking to interact with and investigate us, leaving me completely breathless. The female in the foreground was much more confident than the male behind and would constantly make close approaches, whilst the male hung back a little, still interested but shy. After more than 10 years working with wildlife in the water, this was one of the best experiences of my life.Credit: Ollie Clarke/ United Nations World Oceans Day www.unworldoceansday.org
    Big and Small Underwater Faces — 2nd PlaceOn one of my many blackwater dives in Anilao, in the Philippines, my guide and I spotted something moving erratically at a depth of around 20 meters, about 10 to 15 centimeters in size. We quickly realized that it was a rare blanket octopus. As we approached, it opened up its beautiful blanket, revealing its multicolored mantle. I managed to take a few shots before it went on its way. I felt truly privileged to have captured this fascinating deep-sea cephalopod. Among its many unique characteristics, this species exhibits some of the most extreme sexual size-dimorphism in nature, with females weighing up to 40,000 times more than males.Credit: Giacomo Marchione/ United Nations World Oceans Day www.unworldoceansday.org
    Big and Small Underwater Faces – WinnerThis photo of a Japanese warbonnetwas captured in the Sea of Japan, about 50 milessouthwest of Vladivostok, Russia. I found the ornate fish at a depth of about 30 meters, under the stern of a shipwreck. This species does not appear to be afraid of divers—on the contrary, it seems to enjoy the attention—and it even tried to sit on the dome port of my camera.Credit: Andrey Nosik/ United Nations World Oceans Day www.unworldoceansday.org
    Wonder: Sustaining What Sustains Us — 2nd PlaceA juvenile pinnate batfishcaptured with a slow shutter speed, a snooted light, and deliberate camera panning to create a sense of motion and drama. Juvenile pinnate batfish are known for their striking black bodies outlined in vibrant orange—a coloration they lose within just a few months as they mature. I encountered this restless subject in the tropical waters of Indonesia’s Lembeh Strait. Capturing this image took patience and persistence over two dives, as these active young fish constantly dart for cover in crevices, making the shot particularly challenging.Credit: Luis Arpa/ United Nations World Oceans Day www.unworldoceansday.org
    #riveting #images #world #oceans #dayphoto
    15 riveting images from the 2025 UN World Oceans Day Photo Competition
    Big and Small Underwater Faces — 3rd Place. Trips to the Antarctic Peninsula always yield amazing encounters with leopard seals. Boldly approaching me and baring his teeth, this individual was keen to point out that this part of Antarctica was his territory. This picture was shot at dusk, resulting in the rather moody atmosphere.   Credit: Lars von Ritter Zahony/ World Ocean’s Day Get the Popular Science daily newsletter💡 Breakthroughs, discoveries, and DIY tips sent every weekday. The striking eye of a humpback whale named Sweet Girl peers at the camera. Just four days later, she would be dead, hit by a speeding boat and one of the 20,000 whales killed by ship strikes each year. Photographer Rachel Moore’s captivating imageof Sweet Girl earned top honors at the 2025 United Nations World Oceans Day Photo Competition. Wonder: Sustaining What Sustains Us — WinnerThis photo, taken in Mo’orea, French Polynesia in 2024, captures the eye of a humpback whale named Sweet Girl, just days before her tragic death. Four days after I captured this intimate moment, she was struck and killed by a fast-moving ship. Her death serves as a heartbreaking reminder of the 20,000 whales lost to ship strikes every year. We are using her story to advocate for stronger protections, petitioning for stricter speed laws around Tahiti and Mo’orea during whale season. I hope Sweet Girl’s legacy will spark real change to protect these incredible animals and prevent further senseless loss.Credit: Rachel Moore/ United Nations World Oceans Day www.unworldoceansday.org Now in its twelfth year, the competition coordinated in collaboration between the UN Division for Ocean Affairs and the Law of the Sea, DivePhotoGuide, Oceanic Global, and  the Intergovernmental Oceanographic Commission of UNESCO. Each year, thousands of underwater photographers submit images that judges award prizes for across four categories: Big and Small Underwater Faces, Underwater Seascapes, Above Water Seascapes, and Wonder: Sustaining What Sustains Us. This year’s winning images include a curious leopard seal, a swarm of jellyfish, and a very grumpy looking Japanese warbonnet. Given our oceans’ perilous state, all competition participants were required to sign a charter of 14 commitments regarding ethics in photography. Underwater Seascapes — Honorable MentionWith only orcas as their natural predators, leopard seals are Antarctica’s most versatile hunters, preying on everything from fish and cephalopods to penguins and other seals. Gentoo penguins are a favored menu item, and leopard seals can be observed patrolling the waters around their colonies. For this shot, I used a split image to capture both worlds: the gentoo penguin colony in the background with the leopard seal on the hunt in the foreground.Credit: Lars von Ritter Zahony/ United Nations World Oceans Day www.unworldoceansday.org Above Water Seascapes – WinnerA serene lake cradled by arid dunes, where a gentle stream breathes life into the heart of Mother Earth’s creation: Captured from an airplane, this image reveals the powerful contrasts and hidden beauty where land and ocean meet, reminding us that the ocean is the source of all life and that everything in nature is deeply connected. The location is a remote stretch of coastline near Shark Bay, Western Australia.Credit: Leander Nardin/ United Nations World Oceans Day www.unworldoceansday.org Above Water Seascapes — 3rd PlaceParadise Harbour is one of the most beautiful places on the Antarctic Peninsula. When I visited, the sea was extremely calm, and I was lucky enough to witness a wonderfully clear reflection of the Suárez Glacierin the water. The only problem was the waves created by our speedboat, and the only way to capture the perfect reflection was to lie on the bottom of the boat while it moved towards the glacier.Credit: Andrey Nosik/ United Nations World Oceans Day www.unworldoceansday.org Underwater Seascapes — 3rd Place“La Rapadura” is a natural hidden treasure on the northern coast of Tenerife, in the Spanish territory of the Canary Islands. Only discovered in 1996, it is one of the most astonishing underwater landscapes in the world, consistently ranking among the planet’s best dive sites. These towering columns of basalt are the result of volcanic processes that occurred between 500,000 and a million years ago. The formation was created when a basaltic lava flow reached the ocean, where, upon cooling and solidifying, it contracted, creating natural structures often compared to the pipes of church organs. Located in a region where marine life has been impacted by once common illegal fishing practices, this stunning natural monument has both geological and ecological value, and scientists and underwater photographers are advocating for its protection.Credit: Pedro Carrillo/ United Nations World Oceans Day www.unworldoceansday.org Underwater Seascapes — WinnerThis year, I had the incredible opportunity to visit a jellyfish lake during a liveaboard trip around southern Raja Ampat, Indonesia. Being surrounded by millions of jellyfish, which have evolved to lose their stinging ability due to the absence of predators, was one of the most breathtaking experiences I’ve ever had.Credit: Dani Escayola/ United Nations World Oceans Day www.unworldoceansday.org Underwater Seascapes — 2nd PlaceThis shot captures a school of rays resting at a cleaning station in Mauritius, where strong currents once attracted them regularly. Some rays grew accustomed to divers, allowing close encounters like this. Sadly, after the severe bleaching that the reefs here suffered last year, such gatherings have become rare, and I fear I may not witness this again at the same spot.Credit: Gerald Rambert/ United Nations World Oceans Day www.unworldoceansday.org Wonder: Sustaining What Sustains Us — 3rd PlaceShot in Cuba’s Jardines de la Reina—a protected shark sanctuary—this image captures a Caribbean reef shark weaving through a group of silky sharks near the surface. Using a slow shutter and strobes as the shark pivoted sharply, the motion blurred into a wave-like arc across its head, lit by the golden hues of sunset. The abundance and behavior of sharks here is a living symbol of what protected oceans can look like.Credit: Steven Lopez/ United Nations World Oceans Day www.unworldoceansday.org  Above Water Seascapes — 2nd PlaceNorthern gannetssoar above the dramatic cliffs of Scotland’s Hermaness National Nature Reserve, their sleek white bodies and black-tipped wings slicing through the Shetland winds. These seabirds, the largest in the North Atlantic, are renowned for their striking plunge-dives, reaching speeds up to 100 kphas they hunt for fish beneath the waves. The cliffs of Hermaness provide ideal nesting sites, with updrafts aiding their take-offs and landings. Each spring, thousands return to this rugged coastline, forming one of the UK’s most significant gannet colonies. It was a major challenge to take photos at the edge of these cliffs at almost 200 meterswith the winds up to 30 kph.Credit: Nur Tucker/ United Nations World Oceans Day www.unworldoceansday.org Above Water Seascapes — Honorable MentionA South Atlantic swell breaks on the Dungeons Reef off the Cape Peninsula, South Africa, shot while photographing a big-wave surf session in October 2017. It’s the crescendoing sounds of these breaking swells that always amazes me.Credit: Ken Findlay/ United Nations World Oceans Day www.unworldoceansday.org Wonder: Sustaining What Sustains Us — Honorable MentionHumpback whales in their thousands migrate along the Ningaloo Reef in Western Australia every year on the way to and from their calving grounds. In four seasons of swimming with them on the reef here, this is the only encounter I’ve had like this one. This pair of huge adult whales repeatedly spy-hopped alongside us, seeking to interact with and investigate us, leaving me completely breathless. The female in the foreground was much more confident than the male behind and would constantly make close approaches, whilst the male hung back a little, still interested but shy. After more than 10 years working with wildlife in the water, this was one of the best experiences of my life.Credit: Ollie Clarke/ United Nations World Oceans Day www.unworldoceansday.org Big and Small Underwater Faces — 2nd PlaceOn one of my many blackwater dives in Anilao, in the Philippines, my guide and I spotted something moving erratically at a depth of around 20 meters, about 10 to 15 centimeters in size. We quickly realized that it was a rare blanket octopus. As we approached, it opened up its beautiful blanket, revealing its multicolored mantle. I managed to take a few shots before it went on its way. I felt truly privileged to have captured this fascinating deep-sea cephalopod. Among its many unique characteristics, this species exhibits some of the most extreme sexual size-dimorphism in nature, with females weighing up to 40,000 times more than males.Credit: Giacomo Marchione/ United Nations World Oceans Day www.unworldoceansday.org Big and Small Underwater Faces – WinnerThis photo of a Japanese warbonnetwas captured in the Sea of Japan, about 50 milessouthwest of Vladivostok, Russia. I found the ornate fish at a depth of about 30 meters, under the stern of a shipwreck. This species does not appear to be afraid of divers—on the contrary, it seems to enjoy the attention—and it even tried to sit on the dome port of my camera.Credit: Andrey Nosik/ United Nations World Oceans Day www.unworldoceansday.org Wonder: Sustaining What Sustains Us — 2nd PlaceA juvenile pinnate batfishcaptured with a slow shutter speed, a snooted light, and deliberate camera panning to create a sense of motion and drama. Juvenile pinnate batfish are known for their striking black bodies outlined in vibrant orange—a coloration they lose within just a few months as they mature. I encountered this restless subject in the tropical waters of Indonesia’s Lembeh Strait. Capturing this image took patience and persistence over two dives, as these active young fish constantly dart for cover in crevices, making the shot particularly challenging.Credit: Luis Arpa/ United Nations World Oceans Day www.unworldoceansday.org #riveting #images #world #oceans #dayphoto
    WWW.POPSCI.COM
    15 riveting images from the 2025 UN World Oceans Day Photo Competition
    Big and Small Underwater Faces — 3rd Place. Trips to the Antarctic Peninsula always yield amazing encounters with leopard seals (Hydrurga leptonyx). Boldly approaching me and baring his teeth, this individual was keen to point out that this part of Antarctica was his territory. This picture was shot at dusk, resulting in the rather moody atmosphere.   Credit: Lars von Ritter Zahony (Germany) / World Ocean’s Day Get the Popular Science daily newsletter💡 Breakthroughs, discoveries, and DIY tips sent every weekday. The striking eye of a humpback whale named Sweet Girl peers at the camera. Just four days later, she would be dead, hit by a speeding boat and one of the 20,000 whales killed by ship strikes each year. Photographer Rachel Moore’s captivating image (seen below) of Sweet Girl earned top honors at the 2025 United Nations World Oceans Day Photo Competition. Wonder: Sustaining What Sustains Us — WinnerThis photo, taken in Mo’orea, French Polynesia in 2024, captures the eye of a humpback whale named Sweet Girl, just days before her tragic death. Four days after I captured this intimate moment, she was struck and killed by a fast-moving ship. Her death serves as a heartbreaking reminder of the 20,000 whales lost to ship strikes every year. We are using her story to advocate for stronger protections, petitioning for stricter speed laws around Tahiti and Mo’orea during whale season. I hope Sweet Girl’s legacy will spark real change to protect these incredible animals and prevent further senseless loss.Credit: Rachel Moore (USA) / United Nations World Oceans Day www.unworldoceansday.org Now in its twelfth year, the competition coordinated in collaboration between the UN Division for Ocean Affairs and the Law of the Sea, DivePhotoGuide (DPG), Oceanic Global, and  the Intergovernmental Oceanographic Commission of UNESCO. Each year, thousands of underwater photographers submit images that judges award prizes for across four categories: Big and Small Underwater Faces, Underwater Seascapes, Above Water Seascapes, and Wonder: Sustaining What Sustains Us. This year’s winning images include a curious leopard seal, a swarm of jellyfish, and a very grumpy looking Japanese warbonnet. Given our oceans’ perilous state, all competition participants were required to sign a charter of 14 commitments regarding ethics in photography. Underwater Seascapes — Honorable MentionWith only orcas as their natural predators, leopard seals are Antarctica’s most versatile hunters, preying on everything from fish and cephalopods to penguins and other seals. Gentoo penguins are a favored menu item, and leopard seals can be observed patrolling the waters around their colonies. For this shot, I used a split image to capture both worlds: the gentoo penguin colony in the background with the leopard seal on the hunt in the foreground.Credit: Lars von Ritter Zahony (Germany) / United Nations World Oceans Day www.unworldoceansday.org Above Water Seascapes – WinnerA serene lake cradled by arid dunes, where a gentle stream breathes life into the heart of Mother Earth’s creation: Captured from an airplane, this image reveals the powerful contrasts and hidden beauty where land and ocean meet, reminding us that the ocean is the source of all life and that everything in nature is deeply connected. The location is a remote stretch of coastline near Shark Bay, Western Australia.Credit: Leander Nardin (Austria) / United Nations World Oceans Day www.unworldoceansday.org Above Water Seascapes — 3rd PlaceParadise Harbour is one of the most beautiful places on the Antarctic Peninsula. When I visited, the sea was extremely calm, and I was lucky enough to witness a wonderfully clear reflection of the Suárez Glacier (aka Petzval Glacier) in the water. The only problem was the waves created by our speedboat, and the only way to capture the perfect reflection was to lie on the bottom of the boat while it moved towards the glacier.Credit: Andrey Nosik (Russia) / United Nations World Oceans Day www.unworldoceansday.org Underwater Seascapes — 3rd Place“La Rapadura” is a natural hidden treasure on the northern coast of Tenerife, in the Spanish territory of the Canary Islands. Only discovered in 1996, it is one of the most astonishing underwater landscapes in the world, consistently ranking among the planet’s best dive sites. These towering columns of basalt are the result of volcanic processes that occurred between 500,000 and a million years ago. The formation was created when a basaltic lava flow reached the ocean, where, upon cooling and solidifying, it contracted, creating natural structures often compared to the pipes of church organs. Located in a region where marine life has been impacted by once common illegal fishing practices, this stunning natural monument has both geological and ecological value, and scientists and underwater photographers are advocating for its protection. (Model: Yolanda Garcia)Credit: Pedro Carrillo (Spain) / United Nations World Oceans Day www.unworldoceansday.org Underwater Seascapes — WinnerThis year, I had the incredible opportunity to visit a jellyfish lake during a liveaboard trip around southern Raja Ampat, Indonesia. Being surrounded by millions of jellyfish, which have evolved to lose their stinging ability due to the absence of predators, was one of the most breathtaking experiences I’ve ever had.Credit: Dani Escayola (Spain) / United Nations World Oceans Day www.unworldoceansday.org Underwater Seascapes — 2nd PlaceThis shot captures a school of rays resting at a cleaning station in Mauritius, where strong currents once attracted them regularly. Some rays grew accustomed to divers, allowing close encounters like this. Sadly, after the severe bleaching that the reefs here suffered last year, such gatherings have become rare, and I fear I may not witness this again at the same spot.Credit: Gerald Rambert (Mauritius) / United Nations World Oceans Day www.unworldoceansday.org Wonder: Sustaining What Sustains Us — 3rd PlaceShot in Cuba’s Jardines de la Reina—a protected shark sanctuary—this image captures a Caribbean reef shark weaving through a group of silky sharks near the surface. Using a slow shutter and strobes as the shark pivoted sharply, the motion blurred into a wave-like arc across its head, lit by the golden hues of sunset. The abundance and behavior of sharks here is a living symbol of what protected oceans can look like.Credit: Steven Lopez (USA) / United Nations World Oceans Day www.unworldoceansday.org  Above Water Seascapes — 2nd PlaceNorthern gannets (Morus bassanus) soar above the dramatic cliffs of Scotland’s Hermaness National Nature Reserve, their sleek white bodies and black-tipped wings slicing through the Shetland winds. These seabirds, the largest in the North Atlantic, are renowned for their striking plunge-dives, reaching speeds up to 100 kph (60 mph) as they hunt for fish beneath the waves. The cliffs of Hermaness provide ideal nesting sites, with updrafts aiding their take-offs and landings. Each spring, thousands return to this rugged coastline, forming one of the UK’s most significant gannet colonies. It was a major challenge to take photos at the edge of these cliffs at almost 200 meters (650 feet) with the winds up to 30 kph (20 mph).Credit: Nur Tucker (UK/Turkey) / United Nations World Oceans Day www.unworldoceansday.org Above Water Seascapes — Honorable MentionA South Atlantic swell breaks on the Dungeons Reef off the Cape Peninsula, South Africa, shot while photographing a big-wave surf session in October 2017. It’s the crescendoing sounds of these breaking swells that always amazes me.Credit: Ken Findlay (South Africa) / United Nations World Oceans Day www.unworldoceansday.org Wonder: Sustaining What Sustains Us — Honorable MentionHumpback whales in their thousands migrate along the Ningaloo Reef in Western Australia every year on the way to and from their calving grounds. In four seasons of swimming with them on the reef here, this is the only encounter I’ve had like this one. This pair of huge adult whales repeatedly spy-hopped alongside us, seeking to interact with and investigate us, leaving me completely breathless. The female in the foreground was much more confident than the male behind and would constantly make close approaches, whilst the male hung back a little, still interested but shy. After more than 10 years working with wildlife in the water, this was one of the best experiences of my life.Credit: Ollie Clarke (UK) / United Nations World Oceans Day www.unworldoceansday.org Big and Small Underwater Faces — 2nd PlaceOn one of my many blackwater dives in Anilao, in the Philippines, my guide and I spotted something moving erratically at a depth of around 20 meters (65 feet), about 10 to 15 centimeters in size. We quickly realized that it was a rare blanket octopus (Tremoctopus sp.). As we approached, it opened up its beautiful blanket, revealing its multicolored mantle. I managed to take a few shots before it went on its way. I felt truly privileged to have captured this fascinating deep-sea cephalopod. Among its many unique characteristics, this species exhibits some of the most extreme sexual size-dimorphism in nature, with females weighing up to 40,000 times more than males.Credit: Giacomo Marchione (Italy) / United Nations World Oceans Day www.unworldoceansday.org Big and Small Underwater Faces – WinnerThis photo of a Japanese warbonnet (Chirolophis japonicus) was captured in the Sea of Japan, about 50 miles (80 kilometers) southwest of Vladivostok, Russia. I found the ornate fish at a depth of about 30 meters (100 feet), under the stern of a shipwreck. This species does not appear to be afraid of divers—on the contrary, it seems to enjoy the attention—and it even tried to sit on the dome port of my camera.Credit: Andrey Nosik (Russia) / United Nations World Oceans Day www.unworldoceansday.org Wonder: Sustaining What Sustains Us — 2nd PlaceA juvenile pinnate batfish (Platax pinnatus) captured with a slow shutter speed, a snooted light, and deliberate camera panning to create a sense of motion and drama. Juvenile pinnate batfish are known for their striking black bodies outlined in vibrant orange—a coloration they lose within just a few months as they mature. I encountered this restless subject in the tropical waters of Indonesia’s Lembeh Strait. Capturing this image took patience and persistence over two dives, as these active young fish constantly dart for cover in crevices, making the shot particularly challenging.Credit: Luis Arpa (Spain) / United Nations World Oceans Day www.unworldoceansday.org
    0 Comentários 0 Compartilhamentos 0 Anterior
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”          
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning. 
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
    I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa,
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that? 
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects.So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    #how #reshaping #future #healthcare #medical
    How AI is reshaping the future of healthcare and medical research
    Transcript        PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”           This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.      Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects.So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   #how #reshaping #future #healthcare #medical
    WWW.MICROSOFT.COM
    How AI is reshaping the future of healthcare and medical research
    Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT (opens in new tab). And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini (opens in new tab). So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
    0 Comentários 0 Compartilhamentos 0 Anterior
  • Why Half Backsplashes Are Taking Over Kitchen Design, According to Experts

    Pictured Above: Designer Amber Lewis balances New England charm with old-world sophistication with a half Calacatta Vagli marble backsplash in the kitchen of this Martha's Vineyard home. To backsplash or not to backsplash? That is the question. Or is it? Because if anyone’s ever told you “you shouldn’t do anything halfway,” they clearly haven’t heard of the half backsplash. This twist on a design mainstay makes a compelling case for stopping short. So maybe the real question is: to backsplash or to half backsplash?Lately, we’ve seen more and more designers going for the latter. “A trend these days is to use 1/2 or 2/3 stone backsplashes with a six- to nine-inch ledge,” says designer Jennifer Gilmer. “This is typically used behind a range and adds interest as well as softening the overall look.” It’s not just aesthetic—it’s strategic functionality. “The ledge is useful for salt and pepper shakers, olive oil, and other items,” she adds. Ahead, we break down everything to know about half backsplashes and why this kitchen trend is gaining traction in the design world.Related StoriesWhat Is a Half Backsplash?Lisa PetroleMagnolia’s director of styling, Ashley Maddox, enlisted the help of designer Hilary Walker to create her midcentury-modern dream home in Waco, Texas. Complete with walnut kitchen cabinetry topped with a Topzstone countertop continued into a partial backsplash.“A half backsplash or 1/3 backsplash is when the material stops at a point on the wall determined by the design,” explains designer Isabella Patrick. This makes it distinct from a “built-out or existing element, such as upper cabinets, a ceiling, soffit, or some other inherent element of the space.” In other words, it’s intentional, not just the result of running out of tile.Courtesy of JN Interior SpacesTaking the ceiling height into consideration, JN Interior Spaces decided a half backsplash would be suitable for this sleek, modern kitchen.While traditional backsplashes typically reach the bottom of upper cabinetry or span the entire wall, partial backsplashes usually stop somewhere around four to 25 inches up, depending on the look you’re going for.And while it may sound like a design compromise, it’s actually quite the opposite.Related StoryWhy Designers Are Loving the Half-Height LookOpting for a half backsplash is a clever way to balance proportion, budget, and visual interest. “If the design does not have upper cabinets, we would opt for a half backsplash to create visual interest,” Patrick says. “A full wall of the same tile or stone could overwhelm the space and seem like an afterthought.”Shannon Dupre/DD RepsIsabella Patrick experimented with this concept in her own kitchen, mixing materials for a more layered half backsplash look.Instead, Patrick often mixes materials—like running Cambria quartzite up from the counter to a ledge, then switching to Fireclay tile above. “This is a great example of how a singular material would have overwhelmed the space but also may have felt like an afterthought,” she explains. “Mixing materials and adding in details and personal touches is what good design is.”Another bonus? It lets the rest of the kitchen sing. “In another design, we eliminated the upper cabinets in favor of a more open and airy look so that the windows were not blocked—and so you were not walking right into a side view of cabinetry,” Patrick says. “No upper cabinets also makes the kitchen feel more of a transitional space and decorative, especially since it opens right into a dining room.”krafty_photos
copyright 2021This kitchen from JN Interior Spaces proves that a partial backsplash can still make a big impact. They chose to use an iridescent, almost-patina tile in this Wyoming kitchen.For Jill Najinigier of JN Interior Spaces, the choice is just as much about form as it is function. “It's all about how the backsplash interacts with the architecture,” she explains. “Wall height, windows, the shape of the hood, upper cabinets, or open shelves—where do they start and terminate?”In one standout project, Najinigier used a luminous tile just tall enough to tuck under a tapered plaster hood, topped with a narrow stone ledge carved from the same slab as the counter. The result? “Clean lines that make a stunning statement.”Mixing materials and adding in details and personal touches is what good design is.It’s Decorative and FunctionalHeather TalbertDesigner Kate Pearce installed a statement-making marble backsplash. Bringing it only halfway up allows its beauty to be appreciated while giving the other aesthetic elements in the space room to breathe.Don’t underestimate what that ledge can do. Designer Kate Pearce swears by hers: “I love my little five-inch-deep marble shelf that allows me to style some vintage kitchenware in the space,” she says. “And I think the shelfis exactly what gives the kitchen an approachable feel—versus having a full backsplash of marble, which would have given the space a more serious vibe.”Stylish ProductionsPrioritizing visually continuity, Italian designer Federica Asack of Masseria Chic used the same leathered sandstone, a natural material that will develop a wonderful patina, for both the counters and the backsplash.Designer Federica Asack of Masseria Chic used a leathered sandstone for both her countertop and half backsplash, adding a ledge that’s just deep enough to style. “It allows for a splash-free decorating opportunity to layer artwork and favorite objects,” she says.Designer Molly Watson agrees: “The simple shelf is just deep enough for some special items to be on display,” she notes of a project where carrying the countertop stone up the wall helped keep things visually calm and scaled to the space. Related StoryThe Verdict on Half BacksplashesErin Kelly"Keeping materials simple in this kitchen was important for scale," says designer Molly Watson. "Carrying the countertop up the wall as a backsplash allowed the space to feel larger."Half backsplashes are having a major design moment, but not just because they’re practical. They’re a blank canvas for creativity. From floating ledges and mixed materials to budget-conscious decisions that don’t skimp on style, they’re a smartway to make your kitchen feel lighter, livelier, and totally considered.So, go ahead—do it halfway.Follow House Beautiful on Instagram and TikTok.
    #why #half #backsplashes #are #taking
    Why Half Backsplashes Are Taking Over Kitchen Design, According to Experts
    Pictured Above: Designer Amber Lewis balances New England charm with old-world sophistication with a half Calacatta Vagli marble backsplash in the kitchen of this Martha's Vineyard home. To backsplash or not to backsplash? That is the question. Or is it? Because if anyone’s ever told you “you shouldn’t do anything halfway,” they clearly haven’t heard of the half backsplash. This twist on a design mainstay makes a compelling case for stopping short. So maybe the real question is: to backsplash or to half backsplash?Lately, we’ve seen more and more designers going for the latter. “A trend these days is to use 1/2 or 2/3 stone backsplashes with a six- to nine-inch ledge,” says designer Jennifer Gilmer. “This is typically used behind a range and adds interest as well as softening the overall look.” It’s not just aesthetic—it’s strategic functionality. “The ledge is useful for salt and pepper shakers, olive oil, and other items,” she adds. Ahead, we break down everything to know about half backsplashes and why this kitchen trend is gaining traction in the design world.Related StoriesWhat Is a Half Backsplash?Lisa PetroleMagnolia’s director of styling, Ashley Maddox, enlisted the help of designer Hilary Walker to create her midcentury-modern dream home in Waco, Texas. Complete with walnut kitchen cabinetry topped with a Topzstone countertop continued into a partial backsplash.“A half backsplash or 1/3 backsplash is when the material stops at a point on the wall determined by the design,” explains designer Isabella Patrick. This makes it distinct from a “built-out or existing element, such as upper cabinets, a ceiling, soffit, or some other inherent element of the space.” In other words, it’s intentional, not just the result of running out of tile.Courtesy of JN Interior SpacesTaking the ceiling height into consideration, JN Interior Spaces decided a half backsplash would be suitable for this sleek, modern kitchen.While traditional backsplashes typically reach the bottom of upper cabinetry or span the entire wall, partial backsplashes usually stop somewhere around four to 25 inches up, depending on the look you’re going for.And while it may sound like a design compromise, it’s actually quite the opposite.Related StoryWhy Designers Are Loving the Half-Height LookOpting for a half backsplash is a clever way to balance proportion, budget, and visual interest. “If the design does not have upper cabinets, we would opt for a half backsplash to create visual interest,” Patrick says. “A full wall of the same tile or stone could overwhelm the space and seem like an afterthought.”Shannon Dupre/DD RepsIsabella Patrick experimented with this concept in her own kitchen, mixing materials for a more layered half backsplash look.Instead, Patrick often mixes materials—like running Cambria quartzite up from the counter to a ledge, then switching to Fireclay tile above. “This is a great example of how a singular material would have overwhelmed the space but also may have felt like an afterthought,” she explains. “Mixing materials and adding in details and personal touches is what good design is.”Another bonus? It lets the rest of the kitchen sing. “In another design, we eliminated the upper cabinets in favor of a more open and airy look so that the windows were not blocked—and so you were not walking right into a side view of cabinetry,” Patrick says. “No upper cabinets also makes the kitchen feel more of a transitional space and decorative, especially since it opens right into a dining room.”krafty_photos
copyright 2021This kitchen from JN Interior Spaces proves that a partial backsplash can still make a big impact. They chose to use an iridescent, almost-patina tile in this Wyoming kitchen.For Jill Najinigier of JN Interior Spaces, the choice is just as much about form as it is function. “It's all about how the backsplash interacts with the architecture,” she explains. “Wall height, windows, the shape of the hood, upper cabinets, or open shelves—where do they start and terminate?”In one standout project, Najinigier used a luminous tile just tall enough to tuck under a tapered plaster hood, topped with a narrow stone ledge carved from the same slab as the counter. The result? “Clean lines that make a stunning statement.”Mixing materials and adding in details and personal touches is what good design is.It’s Decorative and FunctionalHeather TalbertDesigner Kate Pearce installed a statement-making marble backsplash. Bringing it only halfway up allows its beauty to be appreciated while giving the other aesthetic elements in the space room to breathe.Don’t underestimate what that ledge can do. Designer Kate Pearce swears by hers: “I love my little five-inch-deep marble shelf that allows me to style some vintage kitchenware in the space,” she says. “And I think the shelfis exactly what gives the kitchen an approachable feel—versus having a full backsplash of marble, which would have given the space a more serious vibe.”Stylish ProductionsPrioritizing visually continuity, Italian designer Federica Asack of Masseria Chic used the same leathered sandstone, a natural material that will develop a wonderful patina, for both the counters and the backsplash.Designer Federica Asack of Masseria Chic used a leathered sandstone for both her countertop and half backsplash, adding a ledge that’s just deep enough to style. “It allows for a splash-free decorating opportunity to layer artwork and favorite objects,” she says.Designer Molly Watson agrees: “The simple shelf is just deep enough for some special items to be on display,” she notes of a project where carrying the countertop stone up the wall helped keep things visually calm and scaled to the space. Related StoryThe Verdict on Half BacksplashesErin Kelly"Keeping materials simple in this kitchen was important for scale," says designer Molly Watson. "Carrying the countertop up the wall as a backsplash allowed the space to feel larger."Half backsplashes are having a major design moment, but not just because they’re practical. They’re a blank canvas for creativity. From floating ledges and mixed materials to budget-conscious decisions that don’t skimp on style, they’re a smartway to make your kitchen feel lighter, livelier, and totally considered.So, go ahead—do it halfway.Follow House Beautiful on Instagram and TikTok. #why #half #backsplashes #are #taking
    WWW.HOUSEBEAUTIFUL.COM
    Why Half Backsplashes Are Taking Over Kitchen Design, According to Experts
    Pictured Above: Designer Amber Lewis balances New England charm with old-world sophistication with a half Calacatta Vagli marble backsplash in the kitchen of this Martha's Vineyard home. To backsplash or not to backsplash? That is the question. Or is it? Because if anyone’s ever told you “you shouldn’t do anything halfway,” they clearly haven’t heard of the half backsplash. This twist on a design mainstay makes a compelling case for stopping short. So maybe the real question is: to backsplash or to half backsplash?Lately, we’ve seen more and more designers going for the latter. “A trend these days is to use 1/2 or 2/3 stone backsplashes with a six- to nine-inch ledge,” says designer Jennifer Gilmer. “This is typically used behind a range and adds interest as well as softening the overall look.” It’s not just aesthetic—it’s strategic functionality. “The ledge is useful for salt and pepper shakers, olive oil, and other items,” she adds. Ahead, we break down everything to know about half backsplashes and why this kitchen trend is gaining traction in the design world.Related StoriesWhat Is a Half Backsplash?Lisa PetroleMagnolia’s director of styling, Ashley Maddox, enlisted the help of designer Hilary Walker to create her midcentury-modern dream home in Waco, Texas. Complete with walnut kitchen cabinetry topped with a Topzstone countertop continued into a partial backsplash.“A half backsplash or 1/3 backsplash is when the material stops at a point on the wall determined by the design,” explains designer Isabella Patrick. This makes it distinct from a “built-out or existing element, such as upper cabinets, a ceiling, soffit, or some other inherent element of the space.” In other words, it’s intentional, not just the result of running out of tile.Courtesy of JN Interior SpacesTaking the ceiling height into consideration, JN Interior Spaces decided a half backsplash would be suitable for this sleek, modern kitchen.While traditional backsplashes typically reach the bottom of upper cabinetry or span the entire wall, partial backsplashes usually stop somewhere around four to 25 inches up, depending on the look you’re going for.And while it may sound like a design compromise, it’s actually quite the opposite.Related StoryWhy Designers Are Loving the Half-Height LookOpting for a half backsplash is a clever way to balance proportion, budget, and visual interest. “If the design does not have upper cabinets, we would opt for a half backsplash to create visual interest,” Patrick says. “A full wall of the same tile or stone could overwhelm the space and seem like an afterthought.”Shannon Dupre/DD RepsIsabella Patrick experimented with this concept in her own kitchen, mixing materials for a more layered half backsplash look.Instead, Patrick often mixes materials—like running Cambria quartzite up from the counter to a ledge, then switching to Fireclay tile above. “This is a great example of how a singular material would have overwhelmed the space but also may have felt like an afterthought,” she explains. “Mixing materials and adding in details and personal touches is what good design is.”Another bonus? It lets the rest of the kitchen sing. “In another design, we eliminated the upper cabinets in favor of a more open and airy look so that the windows were not blocked—and so you were not walking right into a side view of cabinetry,” Patrick says. “No upper cabinets also makes the kitchen feel more of a transitional space and decorative, especially since it opens right into a dining room.”krafty_photos
copyright 2021This kitchen from JN Interior Spaces proves that a partial backsplash can still make a big impact. They chose to use an iridescent, almost-patina tile in this Wyoming kitchen.For Jill Najinigier of JN Interior Spaces, the choice is just as much about form as it is function. “It's all about how the backsplash interacts with the architecture,” she explains. “Wall height, windows, the shape of the hood, upper cabinets, or open shelves—where do they start and terminate?”In one standout project, Najinigier used a luminous tile just tall enough to tuck under a tapered plaster hood, topped with a narrow stone ledge carved from the same slab as the counter. The result? “Clean lines that make a stunning statement.”Mixing materials and adding in details and personal touches is what good design is.It’s Decorative and FunctionalHeather TalbertDesigner Kate Pearce installed a statement-making marble backsplash. Bringing it only halfway up allows its beauty to be appreciated while giving the other aesthetic elements in the space room to breathe.Don’t underestimate what that ledge can do. Designer Kate Pearce swears by hers: “I love my little five-inch-deep marble shelf that allows me to style some vintage kitchenware in the space,” she says. “And I think the shelf (and the pieces styled on it) is exactly what gives the kitchen an approachable feel—versus having a full backsplash of marble, which would have given the space a more serious vibe.”Stylish ProductionsPrioritizing visually continuity, Italian designer Federica Asack of Masseria Chic used the same leathered sandstone, a natural material that will develop a wonderful patina, for both the counters and the backsplash.Designer Federica Asack of Masseria Chic used a leathered sandstone for both her countertop and half backsplash, adding a ledge that’s just deep enough to style. “It allows for a splash-free decorating opportunity to layer artwork and favorite objects,” she says.Designer Molly Watson agrees: “The simple shelf is just deep enough for some special items to be on display,” she notes of a project where carrying the countertop stone up the wall helped keep things visually calm and scaled to the space. Related StoryThe Verdict on Half BacksplashesErin Kelly"Keeping materials simple in this kitchen was important for scale," says designer Molly Watson. "Carrying the countertop up the wall as a backsplash allowed the space to feel larger."Half backsplashes are having a major design moment, but not just because they’re practical. They’re a blank canvas for creativity. From floating ledges and mixed materials to budget-conscious decisions that don’t skimp on style, they’re a smart (and stylish) way to make your kitchen feel lighter, livelier, and totally considered.So, go ahead—do it halfway.Follow House Beautiful on Instagram and TikTok.
    0 Comentários 0 Compartilhamentos 0 Anterior
  • How a planetarium show discovered a spiral at the edge of our solar system

    If you’ve ever flown through outer space, at least while watching a documentary or a science fiction film, you’ve seen how artists turn astronomical findings into stunning visuals. But in the process of visualizing data for their latest planetarium show, a production team at New York’s American Museum of Natural History made a surprising discovery of their own: a trillion-and-a-half mile long spiral of material drifting along the edge of our solar system.

    “So this is a really fun thing that happened,” says Jackie Faherty, the museum’s senior scientist.

    Last winter, Faherty and her colleagues were beneath the dome of the museum’s Hayden Planetarium, fine-tuning a scene that featured the Oort cloud, the big, thick bubble surrounding our Sun and planets that’s filled with ice and rock and other remnants from the solar system’s infancy. The Oort cloud begins far beyond Neptune, around one and a half light years from the Sun. It has never been directly observed; its existence is inferred from the behavior of long-period comets entering the inner solar system. The cloud is so expansive that the Voyager spacecraft, our most distant probes, would need another 250 years just to reach its inner boundary; to reach the other side, they would need about 30,000 years. 

    The 30-minute show, Encounters in the Milky Way, narrated by Pedro Pascal, guides audiences on a trip through the galaxy across billions of years. For a section about our nascent solar system, the writing team decided “there’s going to be a fly-by” of the Oort cloud, Faherty says. “But what does our Oort cloud look like?” 

    To find out, the museum consulted astronomers and turned to David Nesvorný, a scientist at the Southwest Research Institute in San Antonio. He provided his model of the millions of particles believed to make up the Oort cloud, based on extensive observational data.

    “Everybody said, go talk to Nesvorný. He’s got the best model,” says Faherty. And “everybody told us, ‘There’s structure in the model,’ so we were kind of set up to look for stuff,” she says. 

    The museum’s technical team began using Nesvorný’s model to simulate how the cloud evolved over time. Later, as the team projected versions of the fly-by scene into the dome, with the camera looking back at the Oort cloud, they saw a familiar shape, one that appears in galaxies, Saturn’s rings, and disks around young stars.

    “We’re flying away from the Oort cloud and out pops this spiral, a spiral shape to the outside of our solar system,” Faherty marveled. “A huge structure, millions and millions of particles.”

    She emailed Nesvorný to ask for “more particles,” with a render of the scene attached. “We noticed the spiral of course,” she wrote. “And then he writes me back: ‘what are you talking about, a spiral?’” 

    While fine-tuning a simulation of the Oort cloud, a vast expanse of ice material leftover from the birth of our Sun, the ‘Encounters in the Milky Way’ production team noticed a very clear shape: a structure made of billions of comets and shaped like a spiral-armed galaxy, seen here in a scene from the final Space ShowMore simulations ensued, this time on Pleiades, a powerful NASA supercomputer. In high-performance computer simulations spanning 4.6 billion years, starting from the Solar System’s earliest days, the researchers visualized how the initial icy and rocky ingredients of the Oort cloud began circling the Sun, in the elliptical orbits that are thought to give the cloud its rough disc shape. The simulations also incorporated the physics of the Sun’s gravitational pull, the influences from our Milky Way galaxy, and the movements of the comets themselves. 

    In each simulation, the spiral persisted.

    “No one has ever seen the Oort structure like that before,” says Faherty. Nesvorný “has a great quote about this: ‘The math was all there. We just needed the visuals.’” 

    An illustration of the Kuiper Belt and Oort Cloud in relation to our solar system.As the Oort cloud grew with the early solar system, Nesvorný and his colleagues hypothesize that the galactic tide, or the gravitational force from the Milky Way, disrupted the orbits of some comets. Although the Sun pulls these objects inward, the galaxy’s gravity appears to have twisted part of the Oort cloud outward, forming a spiral tilted roughly 30 degrees from the plane of the solar system.

    “As the galactic tide acts to decouple bodies from the scattered disk it creates a spiral structure in physical space that is roughly 15,000 astronomical units in length,” or around 1.4 trillion miles from one end to the other, the researchers write in a paper that was published in March in the Astrophysical Journal. “The spiral is long-lived and persists in the inner Oort Cloud to the present time.”

    “The physics makes sense,” says Faherty. “Scientists, we’re amazing at what we do, but it doesn’t mean we can see everything right away.”

    It helped that the team behind the space show was primed to look for something, says Carter Emmart, the museum’s director of astrovisualization and director of Encounters. Astronomers had described Nesvorný’s model as having “a structure,” which intrigued the team’s artists. “We were also looking for structure so that it wouldn’t just be sort of like a big blob,” he says. “Other models were also revealing this—but they just hadn’t been visualized.”

    The museum’s attempts to simulate nature date back to its first habitat dioramas in the early 1900s, which brought visitors to places that hadn’t yet been captured by color photos, TV, or the web. The planetarium, a night sky simulator for generations of would-be scientists and astronauts, got its start after financier Charles Hayden bought the museum its first Zeiss projector. The planetarium now boasts one of the world’s few Zeiss Mark IX systems.

    Still, these days the star projector is rarely used, Emmart says, now that fulldome laser projectors can turn the old static starfield into 3D video running at 60 frames per second. The Hayden boasts six custom-built Christie projectors, part of what the museum’s former president called “the most advanced planetarium ever attempted.”

     In about 1.3 million years, the star system Gliese 710 is set to pass directly through our Oort Cloud, an event visualized in a dramatic scene in ‘Encounters in the Milky Way.’ During its flyby, our systems will swap icy comets, flinging some out on new paths.Emmart recalls how in 1998, when he and other museum leaders were imagining the future of space shows at the Hayden—now with the help of digital projectors and computer graphics—there were questions over how much space they could try to show.

    “We’re talking about these astronomical data sets we could plot to make the galaxy and the stars,” he says. “Of course, we knew that we would have this star projector, but we really wanted to emphasize astrophysics with this dome video system. I was drawing pictures of this just to get our heads around it and noting the tip of the solar system to the Milky Way is about 60 degrees. And I said, what are we gonna do when we get outside the Milky Way?’

    “ThenNeil Degrasse Tyson “goes, ‘whoa, whoa, whoa, Carter, we have enough to do. And just plotting the Milky Way, that’s hard enough.’ And I said, ‘well, when we exit the Milky Way and we don’t see any other galaxies, that’s sort of like astronomy in 1920—we thought maybe the entire universe is just a Milky Way.'”

    “And that kind of led to a chaotic discussion about, well, what other data sets are there for this?” Emmart adds.

    The museum worked with astronomer Brent Tully, who had mapped 3500 galaxies beyond the Milky Way, in collaboration with the National Center for Super Computing Applications. “That was it,” he says, “and that seemed fantastical.”

    By the time the first planetarium show opened at the museum’s new Rose Center for Earth and Space in 2000, Tully had broadened his survey “to an amazing” 30,000 galaxies. The Sloan Digital Sky Survey followed—it’s now at data release 18—with six million galaxies.

    To build the map of the universe that underlies Encounters, the team also relied on data from the European Space Agency’s space observatory, Gaia. Launched in 2013 and powered down in March of this year, Gaia brought an unprecedented precision to our astronomical map, plotting the distance between 1.7 billion stars. To visualize and render the simulated data, Jon Parker, the museum’s lead technical director, relied on Houdini, a 3D animation tool by Toronto-based SideFX.

    The goal is immersion, “whether it’s in front of the buffalo downstairs, and seeing what those herds were like before we decimated them, to coming in this room and being teleported to space, with an accurate foundation in the science,” Emmart says. “But the art is important, because the art is the way to the soul.” 

    The museum, he adds, is “a testament to wonder. And I think wonder is a gateway to inspiration, and inspiration is a gateway to motivation.”

    Three-D visuals aren’t just powerful tools for communicating science, but increasingly crucial for science itself. Software like OpenSpace, an open source simulation tool developed by the museum, along with the growing availability of high-performance computing, are making it easier to build highly detailed visuals of ever larger and more complex collections of data.

    “Anytime we look, literally, from a different angle at catalogs of astronomical positions, simulations, or exploring the phase space of a complex data set, there is great potential to discover something new,” says Brian R. Kent, an astronomer and director of science communications at National Radio Astronomy Observatory. “There is also a wealth of astronomics tatical data in archives that can be reanalyzed in new ways, leading to new discoveries.”

    As the instruments grow in size and sophistication, so does the data, and the challenge of understanding it. Like all scientists, astronomers are facing a deluge of data, ranging from gamma rays and X-rays to ultraviolet, optical, infrared, and radio bands.

    Our Oort cloud, a shell of icy bodies that surrounds the solar system and extends one-and-a-half light years in every direction, is shown in this scene from ‘Encounters in the Milky Way’ along with the Oort clouds of neighboring stars. The more massive the star, the larger its Oort cloud“New facilities like the Next Generation Very Large Array here at NRAO or the Vera Rubin Observatory and LSST survey project will generate large volumes of data, so astronomers have to get creative with how to analyze it,” says Kent. 

    More data—and new instruments—will also be needed to prove the spiral itself is actually there: there’s still no known way to even observe the Oort cloud. 

    Instead, the paper notes, the structure will have to be measured from “detection of a large number of objects” in the radius of the inner Oort cloud or from “thermal emission from small particles in the Oort spiral.” 

    The Vera C. Rubin Observatory, a powerful, U.S.-funded telescope that recently began operation in Chile, could possibly observe individual icy bodies within the cloud. But researchers expect the telescope will likely discover only dozens of these objects, maybe hundreds, not enough to meaningfully visualize any shapes in the Oort cloud. 

    For us, here and now, the 1.4 trillion mile-long spiral will remain confined to the inside of a dark dome across the street from Central Park.
    #how #planetarium #show #discovered #spiral
    How a planetarium show discovered a spiral at the edge of our solar system
    If you’ve ever flown through outer space, at least while watching a documentary or a science fiction film, you’ve seen how artists turn astronomical findings into stunning visuals. But in the process of visualizing data for their latest planetarium show, a production team at New York’s American Museum of Natural History made a surprising discovery of their own: a trillion-and-a-half mile long spiral of material drifting along the edge of our solar system. “So this is a really fun thing that happened,” says Jackie Faherty, the museum’s senior scientist. Last winter, Faherty and her colleagues were beneath the dome of the museum’s Hayden Planetarium, fine-tuning a scene that featured the Oort cloud, the big, thick bubble surrounding our Sun and planets that’s filled with ice and rock and other remnants from the solar system’s infancy. The Oort cloud begins far beyond Neptune, around one and a half light years from the Sun. It has never been directly observed; its existence is inferred from the behavior of long-period comets entering the inner solar system. The cloud is so expansive that the Voyager spacecraft, our most distant probes, would need another 250 years just to reach its inner boundary; to reach the other side, they would need about 30,000 years.  The 30-minute show, Encounters in the Milky Way, narrated by Pedro Pascal, guides audiences on a trip through the galaxy across billions of years. For a section about our nascent solar system, the writing team decided “there’s going to be a fly-by” of the Oort cloud, Faherty says. “But what does our Oort cloud look like?”  To find out, the museum consulted astronomers and turned to David Nesvorný, a scientist at the Southwest Research Institute in San Antonio. He provided his model of the millions of particles believed to make up the Oort cloud, based on extensive observational data. “Everybody said, go talk to Nesvorný. He’s got the best model,” says Faherty. And “everybody told us, ‘There’s structure in the model,’ so we were kind of set up to look for stuff,” she says.  The museum’s technical team began using Nesvorný’s model to simulate how the cloud evolved over time. Later, as the team projected versions of the fly-by scene into the dome, with the camera looking back at the Oort cloud, they saw a familiar shape, one that appears in galaxies, Saturn’s rings, and disks around young stars. “We’re flying away from the Oort cloud and out pops this spiral, a spiral shape to the outside of our solar system,” Faherty marveled. “A huge structure, millions and millions of particles.” She emailed Nesvorný to ask for “more particles,” with a render of the scene attached. “We noticed the spiral of course,” she wrote. “And then he writes me back: ‘what are you talking about, a spiral?’”  While fine-tuning a simulation of the Oort cloud, a vast expanse of ice material leftover from the birth of our Sun, the ‘Encounters in the Milky Way’ production team noticed a very clear shape: a structure made of billions of comets and shaped like a spiral-armed galaxy, seen here in a scene from the final Space ShowMore simulations ensued, this time on Pleiades, a powerful NASA supercomputer. In high-performance computer simulations spanning 4.6 billion years, starting from the Solar System’s earliest days, the researchers visualized how the initial icy and rocky ingredients of the Oort cloud began circling the Sun, in the elliptical orbits that are thought to give the cloud its rough disc shape. The simulations also incorporated the physics of the Sun’s gravitational pull, the influences from our Milky Way galaxy, and the movements of the comets themselves.  In each simulation, the spiral persisted. “No one has ever seen the Oort structure like that before,” says Faherty. Nesvorný “has a great quote about this: ‘The math was all there. We just needed the visuals.’”  An illustration of the Kuiper Belt and Oort Cloud in relation to our solar system.As the Oort cloud grew with the early solar system, Nesvorný and his colleagues hypothesize that the galactic tide, or the gravitational force from the Milky Way, disrupted the orbits of some comets. Although the Sun pulls these objects inward, the galaxy’s gravity appears to have twisted part of the Oort cloud outward, forming a spiral tilted roughly 30 degrees from the plane of the solar system. “As the galactic tide acts to decouple bodies from the scattered disk it creates a spiral structure in physical space that is roughly 15,000 astronomical units in length,” or around 1.4 trillion miles from one end to the other, the researchers write in a paper that was published in March in the Astrophysical Journal. “The spiral is long-lived and persists in the inner Oort Cloud to the present time.” “The physics makes sense,” says Faherty. “Scientists, we’re amazing at what we do, but it doesn’t mean we can see everything right away.” It helped that the team behind the space show was primed to look for something, says Carter Emmart, the museum’s director of astrovisualization and director of Encounters. Astronomers had described Nesvorný’s model as having “a structure,” which intrigued the team’s artists. “We were also looking for structure so that it wouldn’t just be sort of like a big blob,” he says. “Other models were also revealing this—but they just hadn’t been visualized.” The museum’s attempts to simulate nature date back to its first habitat dioramas in the early 1900s, which brought visitors to places that hadn’t yet been captured by color photos, TV, or the web. The planetarium, a night sky simulator for generations of would-be scientists and astronauts, got its start after financier Charles Hayden bought the museum its first Zeiss projector. The planetarium now boasts one of the world’s few Zeiss Mark IX systems. Still, these days the star projector is rarely used, Emmart says, now that fulldome laser projectors can turn the old static starfield into 3D video running at 60 frames per second. The Hayden boasts six custom-built Christie projectors, part of what the museum’s former president called “the most advanced planetarium ever attempted.”  In about 1.3 million years, the star system Gliese 710 is set to pass directly through our Oort Cloud, an event visualized in a dramatic scene in ‘Encounters in the Milky Way.’ During its flyby, our systems will swap icy comets, flinging some out on new paths.Emmart recalls how in 1998, when he and other museum leaders were imagining the future of space shows at the Hayden—now with the help of digital projectors and computer graphics—there were questions over how much space they could try to show. “We’re talking about these astronomical data sets we could plot to make the galaxy and the stars,” he says. “Of course, we knew that we would have this star projector, but we really wanted to emphasize astrophysics with this dome video system. I was drawing pictures of this just to get our heads around it and noting the tip of the solar system to the Milky Way is about 60 degrees. And I said, what are we gonna do when we get outside the Milky Way?’ “ThenNeil Degrasse Tyson “goes, ‘whoa, whoa, whoa, Carter, we have enough to do. And just plotting the Milky Way, that’s hard enough.’ And I said, ‘well, when we exit the Milky Way and we don’t see any other galaxies, that’s sort of like astronomy in 1920—we thought maybe the entire universe is just a Milky Way.'” “And that kind of led to a chaotic discussion about, well, what other data sets are there for this?” Emmart adds. The museum worked with astronomer Brent Tully, who had mapped 3500 galaxies beyond the Milky Way, in collaboration with the National Center for Super Computing Applications. “That was it,” he says, “and that seemed fantastical.” By the time the first planetarium show opened at the museum’s new Rose Center for Earth and Space in 2000, Tully had broadened his survey “to an amazing” 30,000 galaxies. The Sloan Digital Sky Survey followed—it’s now at data release 18—with six million galaxies. To build the map of the universe that underlies Encounters, the team also relied on data from the European Space Agency’s space observatory, Gaia. Launched in 2013 and powered down in March of this year, Gaia brought an unprecedented precision to our astronomical map, plotting the distance between 1.7 billion stars. To visualize and render the simulated data, Jon Parker, the museum’s lead technical director, relied on Houdini, a 3D animation tool by Toronto-based SideFX. The goal is immersion, “whether it’s in front of the buffalo downstairs, and seeing what those herds were like before we decimated them, to coming in this room and being teleported to space, with an accurate foundation in the science,” Emmart says. “But the art is important, because the art is the way to the soul.”  The museum, he adds, is “a testament to wonder. And I think wonder is a gateway to inspiration, and inspiration is a gateway to motivation.” Three-D visuals aren’t just powerful tools for communicating science, but increasingly crucial for science itself. Software like OpenSpace, an open source simulation tool developed by the museum, along with the growing availability of high-performance computing, are making it easier to build highly detailed visuals of ever larger and more complex collections of data. “Anytime we look, literally, from a different angle at catalogs of astronomical positions, simulations, or exploring the phase space of a complex data set, there is great potential to discover something new,” says Brian R. Kent, an astronomer and director of science communications at National Radio Astronomy Observatory. “There is also a wealth of astronomics tatical data in archives that can be reanalyzed in new ways, leading to new discoveries.” As the instruments grow in size and sophistication, so does the data, and the challenge of understanding it. Like all scientists, astronomers are facing a deluge of data, ranging from gamma rays and X-rays to ultraviolet, optical, infrared, and radio bands. Our Oort cloud, a shell of icy bodies that surrounds the solar system and extends one-and-a-half light years in every direction, is shown in this scene from ‘Encounters in the Milky Way’ along with the Oort clouds of neighboring stars. The more massive the star, the larger its Oort cloud“New facilities like the Next Generation Very Large Array here at NRAO or the Vera Rubin Observatory and LSST survey project will generate large volumes of data, so astronomers have to get creative with how to analyze it,” says Kent.  More data—and new instruments—will also be needed to prove the spiral itself is actually there: there’s still no known way to even observe the Oort cloud.  Instead, the paper notes, the structure will have to be measured from “detection of a large number of objects” in the radius of the inner Oort cloud or from “thermal emission from small particles in the Oort spiral.”  The Vera C. Rubin Observatory, a powerful, U.S.-funded telescope that recently began operation in Chile, could possibly observe individual icy bodies within the cloud. But researchers expect the telescope will likely discover only dozens of these objects, maybe hundreds, not enough to meaningfully visualize any shapes in the Oort cloud.  For us, here and now, the 1.4 trillion mile-long spiral will remain confined to the inside of a dark dome across the street from Central Park. #how #planetarium #show #discovered #spiral
    WWW.FASTCOMPANY.COM
    How a planetarium show discovered a spiral at the edge of our solar system
    If you’ve ever flown through outer space, at least while watching a documentary or a science fiction film, you’ve seen how artists turn astronomical findings into stunning visuals. But in the process of visualizing data for their latest planetarium show, a production team at New York’s American Museum of Natural History made a surprising discovery of their own: a trillion-and-a-half mile long spiral of material drifting along the edge of our solar system. “So this is a really fun thing that happened,” says Jackie Faherty, the museum’s senior scientist. Last winter, Faherty and her colleagues were beneath the dome of the museum’s Hayden Planetarium, fine-tuning a scene that featured the Oort cloud, the big, thick bubble surrounding our Sun and planets that’s filled with ice and rock and other remnants from the solar system’s infancy. The Oort cloud begins far beyond Neptune, around one and a half light years from the Sun. It has never been directly observed; its existence is inferred from the behavior of long-period comets entering the inner solar system. The cloud is so expansive that the Voyager spacecraft, our most distant probes, would need another 250 years just to reach its inner boundary; to reach the other side, they would need about 30,000 years.  The 30-minute show, Encounters in the Milky Way, narrated by Pedro Pascal, guides audiences on a trip through the galaxy across billions of years. For a section about our nascent solar system, the writing team decided “there’s going to be a fly-by” of the Oort cloud, Faherty says. “But what does our Oort cloud look like?”  To find out, the museum consulted astronomers and turned to David Nesvorný, a scientist at the Southwest Research Institute in San Antonio. He provided his model of the millions of particles believed to make up the Oort cloud, based on extensive observational data. “Everybody said, go talk to Nesvorný. He’s got the best model,” says Faherty. And “everybody told us, ‘There’s structure in the model,’ so we were kind of set up to look for stuff,” she says.  The museum’s technical team began using Nesvorný’s model to simulate how the cloud evolved over time. Later, as the team projected versions of the fly-by scene into the dome, with the camera looking back at the Oort cloud, they saw a familiar shape, one that appears in galaxies, Saturn’s rings, and disks around young stars. “We’re flying away from the Oort cloud and out pops this spiral, a spiral shape to the outside of our solar system,” Faherty marveled. “A huge structure, millions and millions of particles.” She emailed Nesvorný to ask for “more particles,” with a render of the scene attached. “We noticed the spiral of course,” she wrote. “And then he writes me back: ‘what are you talking about, a spiral?’”  While fine-tuning a simulation of the Oort cloud, a vast expanse of ice material leftover from the birth of our Sun, the ‘Encounters in the Milky Way’ production team noticed a very clear shape: a structure made of billions of comets and shaped like a spiral-armed galaxy, seen here in a scene from the final Space Show (curving, dusty S-shape behind the Sun) [Image: © AMNH] More simulations ensued, this time on Pleiades, a powerful NASA supercomputer. In high-performance computer simulations spanning 4.6 billion years, starting from the Solar System’s earliest days, the researchers visualized how the initial icy and rocky ingredients of the Oort cloud began circling the Sun, in the elliptical orbits that are thought to give the cloud its rough disc shape. The simulations also incorporated the physics of the Sun’s gravitational pull, the influences from our Milky Way galaxy, and the movements of the comets themselves.  In each simulation, the spiral persisted. “No one has ever seen the Oort structure like that before,” says Faherty. Nesvorný “has a great quote about this: ‘The math was all there. We just needed the visuals.’”  An illustration of the Kuiper Belt and Oort Cloud in relation to our solar system. [Image: NASA] As the Oort cloud grew with the early solar system, Nesvorný and his colleagues hypothesize that the galactic tide, or the gravitational force from the Milky Way, disrupted the orbits of some comets. Although the Sun pulls these objects inward, the galaxy’s gravity appears to have twisted part of the Oort cloud outward, forming a spiral tilted roughly 30 degrees from the plane of the solar system. “As the galactic tide acts to decouple bodies from the scattered disk it creates a spiral structure in physical space that is roughly 15,000 astronomical units in length,” or around 1.4 trillion miles from one end to the other, the researchers write in a paper that was published in March in the Astrophysical Journal. “The spiral is long-lived and persists in the inner Oort Cloud to the present time.” “The physics makes sense,” says Faherty. “Scientists, we’re amazing at what we do, but it doesn’t mean we can see everything right away.” It helped that the team behind the space show was primed to look for something, says Carter Emmart, the museum’s director of astrovisualization and director of Encounters. Astronomers had described Nesvorný’s model as having “a structure,” which intrigued the team’s artists. “We were also looking for structure so that it wouldn’t just be sort of like a big blob,” he says. “Other models were also revealing this—but they just hadn’t been visualized.” The museum’s attempts to simulate nature date back to its first habitat dioramas in the early 1900s, which brought visitors to places that hadn’t yet been captured by color photos, TV, or the web. The planetarium, a night sky simulator for generations of would-be scientists and astronauts, got its start after financier Charles Hayden bought the museum its first Zeiss projector. The planetarium now boasts one of the world’s few Zeiss Mark IX systems. Still, these days the star projector is rarely used, Emmart says, now that fulldome laser projectors can turn the old static starfield into 3D video running at 60 frames per second. The Hayden boasts six custom-built Christie projectors, part of what the museum’s former president called “the most advanced planetarium ever attempted.”  In about 1.3 million years, the star system Gliese 710 is set to pass directly through our Oort Cloud, an event visualized in a dramatic scene in ‘Encounters in the Milky Way.’ During its flyby, our systems will swap icy comets, flinging some out on new paths. [Image: © AMNH] Emmart recalls how in 1998, when he and other museum leaders were imagining the future of space shows at the Hayden—now with the help of digital projectors and computer graphics—there were questions over how much space they could try to show. “We’re talking about these astronomical data sets we could plot to make the galaxy and the stars,” he says. “Of course, we knew that we would have this star projector, but we really wanted to emphasize astrophysics with this dome video system. I was drawing pictures of this just to get our heads around it and noting the tip of the solar system to the Milky Way is about 60 degrees. And I said, what are we gonna do when we get outside the Milky Way?’ “Then [planetarium’s director] Neil Degrasse Tyson “goes, ‘whoa, whoa, whoa, Carter, we have enough to do. And just plotting the Milky Way, that’s hard enough.’ And I said, ‘well, when we exit the Milky Way and we don’t see any other galaxies, that’s sort of like astronomy in 1920—we thought maybe the entire universe is just a Milky Way.'” “And that kind of led to a chaotic discussion about, well, what other data sets are there for this?” Emmart adds. The museum worked with astronomer Brent Tully, who had mapped 3500 galaxies beyond the Milky Way, in collaboration with the National Center for Super Computing Applications. “That was it,” he says, “and that seemed fantastical.” By the time the first planetarium show opened at the museum’s new Rose Center for Earth and Space in 2000, Tully had broadened his survey “to an amazing” 30,000 galaxies. The Sloan Digital Sky Survey followed—it’s now at data release 18—with six million galaxies. To build the map of the universe that underlies Encounters, the team also relied on data from the European Space Agency’s space observatory, Gaia. Launched in 2013 and powered down in March of this year, Gaia brought an unprecedented precision to our astronomical map, plotting the distance between 1.7 billion stars. To visualize and render the simulated data, Jon Parker, the museum’s lead technical director, relied on Houdini, a 3D animation tool by Toronto-based SideFX. The goal is immersion, “whether it’s in front of the buffalo downstairs, and seeing what those herds were like before we decimated them, to coming in this room and being teleported to space, with an accurate foundation in the science,” Emmart says. “But the art is important, because the art is the way to the soul.”  The museum, he adds, is “a testament to wonder. And I think wonder is a gateway to inspiration, and inspiration is a gateway to motivation.” Three-D visuals aren’t just powerful tools for communicating science, but increasingly crucial for science itself. Software like OpenSpace, an open source simulation tool developed by the museum, along with the growing availability of high-performance computing, are making it easier to build highly detailed visuals of ever larger and more complex collections of data. “Anytime we look, literally, from a different angle at catalogs of astronomical positions, simulations, or exploring the phase space of a complex data set, there is great potential to discover something new,” says Brian R. Kent, an astronomer and director of science communications at National Radio Astronomy Observatory. “There is also a wealth of astronomics tatical data in archives that can be reanalyzed in new ways, leading to new discoveries.” As the instruments grow in size and sophistication, so does the data, and the challenge of understanding it. Like all scientists, astronomers are facing a deluge of data, ranging from gamma rays and X-rays to ultraviolet, optical, infrared, and radio bands. Our Oort cloud (center), a shell of icy bodies that surrounds the solar system and extends one-and-a-half light years in every direction, is shown in this scene from ‘Encounters in the Milky Way’ along with the Oort clouds of neighboring stars. The more massive the star, the larger its Oort cloud [Image: © AMNH ] “New facilities like the Next Generation Very Large Array here at NRAO or the Vera Rubin Observatory and LSST survey project will generate large volumes of data, so astronomers have to get creative with how to analyze it,” says Kent.  More data—and new instruments—will also be needed to prove the spiral itself is actually there: there’s still no known way to even observe the Oort cloud.  Instead, the paper notes, the structure will have to be measured from “detection of a large number of objects” in the radius of the inner Oort cloud or from “thermal emission from small particles in the Oort spiral.”  The Vera C. Rubin Observatory, a powerful, U.S.-funded telescope that recently began operation in Chile, could possibly observe individual icy bodies within the cloud. But researchers expect the telescope will likely discover only dozens of these objects, maybe hundreds, not enough to meaningfully visualize any shapes in the Oort cloud.  For us, here and now, the 1.4 trillion mile-long spiral will remain confined to the inside of a dark dome across the street from Central Park.
    0 Comentários 0 Compartilhamentos 0 Anterior
CGShares https://cgshares.com