• A New Last Airbender Bestiary Art Book Launching September 23

    Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra| Releases September 23 Preorder Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra| Releases September 23 Preorder Earlier this year, Nickelodeon announced Avatar is coming back with a new animated series called Avatar: Seven Havens, and there's a new Avatar: The Last Airbender live-action movie on the way, too, making now a good time to brush up on the lore and rich worldbuilding the franchise is known for. One way to do that is with the upcoming Beasts of the Four Nations, a 128-page hardcover bestiary offering in-universe lore and behind-the-scenes details on the wildlife and mythical creatures of both animated series. You can preorder the standard edition foror secure a copy of the Deluxe Edition that includes exclusive cover art and a lithograph print. Preorders for both editions are available , and both ship September 23. Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra| Releases September 23 Written by John O'Bryan, Beasts of the Four Nations includes illustrations and information on the many fantastical beasts of The Last Airbender's world. Everything from the Air Nomads’ flying bison to Kyoshi Island’s elephant koi and the Earth Kingdom’s singing groundhogs are detailed in the book, along with commentary by Avatar and Legend of Korra creators Bryan Konietzko and Michael Dante DiMartino. The standard edition launches September 23 and is available to preorder for. A Deluxe Edition will also launch on the same day that includes a few extras, which we've detailed below. Preorder Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra| Releases September 23 The Beasts of the Four Nations Deluxe Edition contains all the contents of the standard edition, but features a few notable upgrades like foil highlights on the cover art and a protective slipcase. The book also comes with an exclusive lithograph print depicting Cai, the cabbage merchant who appears throughout the Avatar series, and his cart pulled by two ostrich horses. You can preorder the Beasts of the Four Nations Deluxe Edition for . Preorder Beasts of the Four Nations Deluxe EditionIf you want to explore more of the Avatar franchise’s visual history, you're in luck, as several more official Avatar: The Last Airbender and The Legend of Korra art books are available, and some are even discounted. There's a giant Avatar: The Last Airbender - The Art of the Animated Series art book that covers all four seasons of the show. It's packed with concept art, design, and production materials, ranging from the very first sketch through to the series finale.The Legend of Korra has a multi-volume art book series available as well Each volume focuses on a specific season of the show and features creator commentaries and exclusive artwork. Standard and Deluxe Editions are available for each volume. The Deluxe Editions include slipcases, lithographs, new covers, and bonus sketches by the show’s creators.Continue Reading at GameSpot
    #new #last #airbender #bestiary #art
    A New Last Airbender Bestiary Art Book Launching September 23
    Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra| Releases September 23 Preorder Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra| Releases September 23 Preorder Earlier this year, Nickelodeon announced Avatar is coming back with a new animated series called Avatar: Seven Havens, and there's a new Avatar: The Last Airbender live-action movie on the way, too, making now a good time to brush up on the lore and rich worldbuilding the franchise is known for. One way to do that is with the upcoming Beasts of the Four Nations, a 128-page hardcover bestiary offering in-universe lore and behind-the-scenes details on the wildlife and mythical creatures of both animated series. You can preorder the standard edition foror secure a copy of the Deluxe Edition that includes exclusive cover art and a lithograph print. Preorders for both editions are available , and both ship September 23. Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra| Releases September 23 Written by John O'Bryan, Beasts of the Four Nations includes illustrations and information on the many fantastical beasts of The Last Airbender's world. Everything from the Air Nomads’ flying bison to Kyoshi Island’s elephant koi and the Earth Kingdom’s singing groundhogs are detailed in the book, along with commentary by Avatar and Legend of Korra creators Bryan Konietzko and Michael Dante DiMartino. The standard edition launches September 23 and is available to preorder for. A Deluxe Edition will also launch on the same day that includes a few extras, which we've detailed below. Preorder Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra| Releases September 23 The Beasts of the Four Nations Deluxe Edition contains all the contents of the standard edition, but features a few notable upgrades like foil highlights on the cover art and a protective slipcase. The book also comes with an exclusive lithograph print depicting Cai, the cabbage merchant who appears throughout the Avatar series, and his cart pulled by two ostrich horses. You can preorder the Beasts of the Four Nations Deluxe Edition for . Preorder Beasts of the Four Nations Deluxe EditionIf you want to explore more of the Avatar franchise’s visual history, you're in luck, as several more official Avatar: The Last Airbender and The Legend of Korra art books are available, and some are even discounted. There's a giant Avatar: The Last Airbender - The Art of the Animated Series art book that covers all four seasons of the show. It's packed with concept art, design, and production materials, ranging from the very first sketch through to the series finale.The Legend of Korra has a multi-volume art book series available as well Each volume focuses on a specific season of the show and features creator commentaries and exclusive artwork. Standard and Deluxe Editions are available for each volume. The Deluxe Editions include slipcases, lithographs, new covers, and bonus sketches by the show’s creators.Continue Reading at GameSpot #new #last #airbender #bestiary #art
    WWW.GAMESPOT.COM
    A New Last Airbender Bestiary Art Book Launching September 23
    Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra $37.19 (was $40) | Releases September 23 Preorder at Amazon Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra (Deluxe Edition) $80 | Releases September 23 Preorder at Amazon Earlier this year, Nickelodeon announced Avatar is coming back with a new animated series called Avatar: Seven Havens, and there's a new Avatar: The Last Airbender live-action movie on the way, too, making now a good time to brush up on the lore and rich worldbuilding the franchise is known for. One way to do that is with the upcoming Beasts of the Four Nations, a 128-page hardcover bestiary offering in-universe lore and behind-the-scenes details on the wildlife and mythical creatures of both animated series. You can preorder the standard edition for $37.19 (down from $40) or secure a copy of the $80 Deluxe Edition that includes exclusive cover art and a lithograph print. Preorders for both editions are available at Amazon, and both ship September 23. Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra $37.19 (was $40) | Releases September 23 Written by John O'Bryan, Beasts of the Four Nations includes illustrations and information on the many fantastical beasts of The Last Airbender's world. Everything from the Air Nomads’ flying bison to Kyoshi Island’s elephant koi and the Earth Kingdom’s singing groundhogs are detailed in the book, along with commentary by Avatar and Legend of Korra creators Bryan Konietzko and Michael Dante DiMartino. The standard edition launches September 23 and is available to preorder for $37.19 (down from $40) at Amazon. A Deluxe Edition will also launch on the same day that includes a few extras, which we've detailed below. Preorder at Amazon Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra (Deluxe Edition) $80 | Releases September 23 The Beasts of the Four Nations Deluxe Edition contains all the contents of the standard edition, but features a few notable upgrades like foil highlights on the cover art and a protective slipcase. The book also comes with an exclusive lithograph print depicting Cai, the cabbage merchant who appears throughout the Avatar series, and his cart pulled by two ostrich horses. You can preorder the Beasts of the Four Nations Deluxe Edition for $80 at Amazon. Preorder at Amazon Beasts of the Four Nations Deluxe EditionIf you want to explore more of the Avatar franchise’s visual history, you're in luck, as several more official Avatar: The Last Airbender and The Legend of Korra art books are available, and some are even discounted. There's a giant Avatar: The Last Airbender - The Art of the Animated Series art book that covers all four seasons of the show. It's packed with concept art, design, and production materials, ranging from the very first sketch through to the series finale.The Legend of Korra has a multi-volume art book series available as well Each volume focuses on a specific season of the show and features creator commentaries and exclusive artwork. Standard and Deluxe Editions are available for each volume. The Deluxe Editions include slipcases, lithographs, new covers, and bonus sketches by the show’s creators.Continue Reading at GameSpot
    Like
    Love
    Angry
    Sad
    18
    0 Commenti 0 condivisioni
  • Game On With GeForce NOW, the Membership That Keeps on Delivering

    This GFN Thursday rolls out a new reward and games for GeForce NOW members. Whether hunting for hot new releases or rediscovering timeless classics, members can always find more ways to play, games to stream and perks to enjoy.
    Gamers can score major discounts on the titles they’ve been eyeing — perfect for streaming in the cloud — during the Steam Summer Sale, running until Thursday, July 10, at 10 a.m. PT.
    This week also brings unforgettable adventures to the cloud: We Happy Few and Broken Age are part of the five additions to the GeForce NOW library this week.
    The fun doesn’t stop there. A new in-game reward for Elder Scrolls Online is now available for members to claim.
    And SteelSeries has launched a new mobile controller that transforms phones into cloud gaming devices with GeForce NOW. Add it to the roster of on-the-go gaming devices — including the recently launched GeForce NOW app on Steam Deck for seamless 4K streaming.
    Scroll Into Power
    GeForce NOW Premium members receive exclusive 24-hour early access to a new mythical reward in The Elder Scrolls Online — Bethesda’s award-winning role-playing game — before it opens to all members. Sharpen the sword, ready the staff and chase glory across the vast, immersive world of Tamriel.
    Fortune favors the bold.
    Claim the mythical Grand Gold Coast Experience Scrolls reward, a rare item that grants a bonus of 150% Experience Points from all sources for one hour. The scroll’s effect pauses while players are offline and resumes upon return, ensuring every minute counts. Whether tackling dungeon runs, completing epic quests or leveling a new character, the scrolls provide a powerful edge. Claim the reward, harness its power and scroll into the next adventure.
    Members who’ve opted into the GeForce NOW Rewards program can check their emails for redemption instructions. The offer runs through Saturday, July 26, while supplies last. Don’t miss this opportunity to become a legend in Tamriel.
    Steam Up Summer
    The Steam Summer Sale is in full swing. Snag games at discounted prices and stream them instantly from the cloud — no downloads, no waiting, just pure gaming bliss.
    Treat yourself.
    Check out the “Steam Summer Sale” row in the GeForce NOW app to find deals on the next adventure. With GeForce NOW, gaming favorites are always just a click away.
    While picking up discounted games, don’t miss the chance to get a GeForce NOW six-month Performance membership at 40% off. This is also the last opportunity to take advantage of the Performance Day Pass sale, ending Friday, June 27 — which lets gamers access cloud gaming for 24 hours — before diving into the 6-month Performance membership.
    Find Adventure
    Two distinct worlds — where secrets simmer and imagination runs wild — are streaming onto the cloud this week.
    Keep calm and blend in.
    Step into the surreal, retro-futuristic streets of We Happy Few, where a society obsessed with happiness hides its secrets behind a mask of forced cheer and a haze of “Joy.” This darkly whimsical adventure invites players to blend in, break out and uncover the truth lurking beneath the surface of Wellington Wells.
    Two worlds, one wild destiny.
    Broken Age spins a charming, hand-painted tale of two teenagers leading parallel lives in worlds at once strange and familiar. One of the teens yearns to escape a stifling spaceship, and the other is destined to challenge ancient traditions. With witty dialogue and heartfelt moments, Broken Age is a storybook come to life, brimming with quirky characters and clever puzzles.
    Each of these unforgettable adventures brings its own flavor — be it dark satire, whimsical wonder or pulse-pounding suspense — offering a taste of gaming at its imaginative peaks. Stream these captivating worlds straight from the cloud and enjoy seamless gameplay, no downloads or high-end hardware required.
    An Ultimate Controller
    Elevated gaming.
    Get ready for the SteelSeries Nimbus Cloud, a new dual-mode cloud controller. When paired with GeForce NOW, this new controller reaches new heights.
    Designed for versatility and comfort, and crafted specifically for cloud gaming, the SteelSeries Nimbus Cloud effortlessly shifts from a mobile device controller to a full-sized wireless controller, delivering top-notch performance and broad compatibility across devices.
    The Nimbus Cloud enables gamers to play wherever they are, as it easily adapts to fit iPhones and Android phones. Or collapse and connect the controller via Bluetooth to a gaming rig or smart TV. Transform any space into a personal gaming station with GeForce NOW and the Nimbus Cloud, part of the list of recommended products for an elevated cloud gaming experience.
    Gaming Never Sleeps
    “System Shock 2” — now with 100% more existential dread.
    System Shock 2: 25th Anniversary Remaster is an overhaul of the acclaimed sci-fi horror classic, rebuilt by Nightdive Studios with enhanced visuals, refined gameplay and features such as cross-play co-op multiplayer. Face the sinister AI SHODAN and her mutant army aboard the starship Von Braun as a cybernetically enhanced soldier with upgradable skills, powerful weapons and psionic abilities. Stream the title from the cloud with GeForce NOW for ultimate flexibility and performance.
    Look for the following games available to stream in the cloud this week:

    System Shock 2: 25th Anniversary RemasterBroken AgeEasy Red 2Sandwich SimulatorWe Happy FewWhat are you planning to play this weekend? Let us know on X or in the comments below.

    The official GFN summer bucket list
    Play anywhere Stream on every screen you own Finally crush that backlog Skip every single download bar
    Drop the emoji for the one you’re tackling right now
    — NVIDIA GeForce NOWJune 25, 2025
    #game #with #geforce #now #membership
    Game On With GeForce NOW, the Membership That Keeps on Delivering
    This GFN Thursday rolls out a new reward and games for GeForce NOW members. Whether hunting for hot new releases or rediscovering timeless classics, members can always find more ways to play, games to stream and perks to enjoy. Gamers can score major discounts on the titles they’ve been eyeing — perfect for streaming in the cloud — during the Steam Summer Sale, running until Thursday, July 10, at 10 a.m. PT. This week also brings unforgettable adventures to the cloud: We Happy Few and Broken Age are part of the five additions to the GeForce NOW library this week. The fun doesn’t stop there. A new in-game reward for Elder Scrolls Online is now available for members to claim. And SteelSeries has launched a new mobile controller that transforms phones into cloud gaming devices with GeForce NOW. Add it to the roster of on-the-go gaming devices — including the recently launched GeForce NOW app on Steam Deck for seamless 4K streaming. Scroll Into Power GeForce NOW Premium members receive exclusive 24-hour early access to a new mythical reward in The Elder Scrolls Online — Bethesda’s award-winning role-playing game — before it opens to all members. Sharpen the sword, ready the staff and chase glory across the vast, immersive world of Tamriel. Fortune favors the bold. Claim the mythical Grand Gold Coast Experience Scrolls reward, a rare item that grants a bonus of 150% Experience Points from all sources for one hour. The scroll’s effect pauses while players are offline and resumes upon return, ensuring every minute counts. Whether tackling dungeon runs, completing epic quests or leveling a new character, the scrolls provide a powerful edge. Claim the reward, harness its power and scroll into the next adventure. Members who’ve opted into the GeForce NOW Rewards program can check their emails for redemption instructions. The offer runs through Saturday, July 26, while supplies last. Don’t miss this opportunity to become a legend in Tamriel. Steam Up Summer The Steam Summer Sale is in full swing. Snag games at discounted prices and stream them instantly from the cloud — no downloads, no waiting, just pure gaming bliss. Treat yourself. Check out the “Steam Summer Sale” row in the GeForce NOW app to find deals on the next adventure. With GeForce NOW, gaming favorites are always just a click away. While picking up discounted games, don’t miss the chance to get a GeForce NOW six-month Performance membership at 40% off. This is also the last opportunity to take advantage of the Performance Day Pass sale, ending Friday, June 27 — which lets gamers access cloud gaming for 24 hours — before diving into the 6-month Performance membership. Find Adventure Two distinct worlds — where secrets simmer and imagination runs wild — are streaming onto the cloud this week. Keep calm and blend in. Step into the surreal, retro-futuristic streets of We Happy Few, where a society obsessed with happiness hides its secrets behind a mask of forced cheer and a haze of “Joy.” This darkly whimsical adventure invites players to blend in, break out and uncover the truth lurking beneath the surface of Wellington Wells. Two worlds, one wild destiny. Broken Age spins a charming, hand-painted tale of two teenagers leading parallel lives in worlds at once strange and familiar. One of the teens yearns to escape a stifling spaceship, and the other is destined to challenge ancient traditions. With witty dialogue and heartfelt moments, Broken Age is a storybook come to life, brimming with quirky characters and clever puzzles. Each of these unforgettable adventures brings its own flavor — be it dark satire, whimsical wonder or pulse-pounding suspense — offering a taste of gaming at its imaginative peaks. Stream these captivating worlds straight from the cloud and enjoy seamless gameplay, no downloads or high-end hardware required. An Ultimate Controller Elevated gaming. Get ready for the SteelSeries Nimbus Cloud, a new dual-mode cloud controller. When paired with GeForce NOW, this new controller reaches new heights. Designed for versatility and comfort, and crafted specifically for cloud gaming, the SteelSeries Nimbus Cloud effortlessly shifts from a mobile device controller to a full-sized wireless controller, delivering top-notch performance and broad compatibility across devices. The Nimbus Cloud enables gamers to play wherever they are, as it easily adapts to fit iPhones and Android phones. Or collapse and connect the controller via Bluetooth to a gaming rig or smart TV. Transform any space into a personal gaming station with GeForce NOW and the Nimbus Cloud, part of the list of recommended products for an elevated cloud gaming experience. Gaming Never Sleeps “System Shock 2” — now with 100% more existential dread. System Shock 2: 25th Anniversary Remaster is an overhaul of the acclaimed sci-fi horror classic, rebuilt by Nightdive Studios with enhanced visuals, refined gameplay and features such as cross-play co-op multiplayer. Face the sinister AI SHODAN and her mutant army aboard the starship Von Braun as a cybernetically enhanced soldier with upgradable skills, powerful weapons and psionic abilities. Stream the title from the cloud with GeForce NOW for ultimate flexibility and performance. Look for the following games available to stream in the cloud this week: System Shock 2: 25th Anniversary RemasterBroken AgeEasy Red 2Sandwich SimulatorWe Happy FewWhat are you planning to play this weekend? Let us know on X or in the comments below. The official GFN summer bucket list Play anywhere Stream on every screen you own Finally crush that backlog Skip every single download bar Drop the emoji for the one you’re tackling right now — NVIDIA GeForce NOWJune 25, 2025 #game #with #geforce #now #membership
    BLOGS.NVIDIA.COM
    Game On With GeForce NOW, the Membership That Keeps on Delivering
    This GFN Thursday rolls out a new reward and games for GeForce NOW members. Whether hunting for hot new releases or rediscovering timeless classics, members can always find more ways to play, games to stream and perks to enjoy. Gamers can score major discounts on the titles they’ve been eyeing — perfect for streaming in the cloud — during the Steam Summer Sale, running until Thursday, July 10, at 10 a.m. PT. This week also brings unforgettable adventures to the cloud: We Happy Few and Broken Age are part of the five additions to the GeForce NOW library this week. The fun doesn’t stop there. A new in-game reward for Elder Scrolls Online is now available for members to claim. And SteelSeries has launched a new mobile controller that transforms phones into cloud gaming devices with GeForce NOW. Add it to the roster of on-the-go gaming devices — including the recently launched GeForce NOW app on Steam Deck for seamless 4K streaming. Scroll Into Power GeForce NOW Premium members receive exclusive 24-hour early access to a new mythical reward in The Elder Scrolls Online — Bethesda’s award-winning role-playing game — before it opens to all members. Sharpen the sword, ready the staff and chase glory across the vast, immersive world of Tamriel. Fortune favors the bold. Claim the mythical Grand Gold Coast Experience Scrolls reward, a rare item that grants a bonus of 150% Experience Points from all sources for one hour. The scroll’s effect pauses while players are offline and resumes upon return, ensuring every minute counts. Whether tackling dungeon runs, completing epic quests or leveling a new character, the scrolls provide a powerful edge. Claim the reward, harness its power and scroll into the next adventure. Members who’ve opted into the GeForce NOW Rewards program can check their emails for redemption instructions. The offer runs through Saturday, July 26, while supplies last. Don’t miss this opportunity to become a legend in Tamriel. Steam Up Summer The Steam Summer Sale is in full swing. Snag games at discounted prices and stream them instantly from the cloud — no downloads, no waiting, just pure gaming bliss. Treat yourself. Check out the “Steam Summer Sale” row in the GeForce NOW app to find deals on the next adventure. With GeForce NOW, gaming favorites are always just a click away. While picking up discounted games, don’t miss the chance to get a GeForce NOW six-month Performance membership at 40% off. This is also the last opportunity to take advantage of the Performance Day Pass sale, ending Friday, June 27 — which lets gamers access cloud gaming for 24 hours — before diving into the 6-month Performance membership. Find Adventure Two distinct worlds — where secrets simmer and imagination runs wild — are streaming onto the cloud this week. Keep calm and blend in (or else). Step into the surreal, retro-futuristic streets of We Happy Few, where a society obsessed with happiness hides its secrets behind a mask of forced cheer and a haze of “Joy.” This darkly whimsical adventure invites players to blend in, break out and uncover the truth lurking beneath the surface of Wellington Wells. Two worlds, one wild destiny. Broken Age spins a charming, hand-painted tale of two teenagers leading parallel lives in worlds at once strange and familiar. One of the teens yearns to escape a stifling spaceship, and the other is destined to challenge ancient traditions. With witty dialogue and heartfelt moments, Broken Age is a storybook come to life, brimming with quirky characters and clever puzzles. Each of these unforgettable adventures brings its own flavor — be it dark satire, whimsical wonder or pulse-pounding suspense — offering a taste of gaming at its imaginative peaks. Stream these captivating worlds straight from the cloud and enjoy seamless gameplay, no downloads or high-end hardware required. An Ultimate Controller Elevated gaming. Get ready for the SteelSeries Nimbus Cloud, a new dual-mode cloud controller. When paired with GeForce NOW, this new controller reaches new heights. Designed for versatility and comfort, and crafted specifically for cloud gaming, the SteelSeries Nimbus Cloud effortlessly shifts from a mobile device controller to a full-sized wireless controller, delivering top-notch performance and broad compatibility across devices. The Nimbus Cloud enables gamers to play wherever they are, as it easily adapts to fit iPhones and Android phones. Or collapse and connect the controller via Bluetooth to a gaming rig or smart TV. Transform any space into a personal gaming station with GeForce NOW and the Nimbus Cloud, part of the list of recommended products for an elevated cloud gaming experience. Gaming Never Sleeps “System Shock 2” — now with 100% more existential dread. System Shock 2: 25th Anniversary Remaster is an overhaul of the acclaimed sci-fi horror classic, rebuilt by Nightdive Studios with enhanced visuals, refined gameplay and features such as cross-play co-op multiplayer. Face the sinister AI SHODAN and her mutant army aboard the starship Von Braun as a cybernetically enhanced soldier with upgradable skills, powerful weapons and psionic abilities. Stream the title from the cloud with GeForce NOW for ultimate flexibility and performance. Look for the following games available to stream in the cloud this week: System Shock 2: 25th Anniversary Remaster (New release on Steam, June 26) Broken Age (Steam) Easy Red 2 (Steam) Sandwich Simulator (Steam) We Happy Few (Steam) What are you planning to play this weekend? Let us know on X or in the comments below. The official GFN summer bucket list Play anywhere Stream on every screen you own Finally crush that backlog Skip every single download bar Drop the emoji for the one you’re tackling right now — NVIDIA GeForce NOW (@NVIDIAGFN) June 25, 2025
    0 Commenti 0 condivisioni
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”          
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning. 
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
    I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa,
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that? 
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects.So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    #how #reshaping #future #healthcare #medical
    How AI is reshaping the future of healthcare and medical research
    Transcript        PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”           This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.      Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects.So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   #how #reshaping #future #healthcare #medical
    WWW.MICROSOFT.COM
    How AI is reshaping the future of healthcare and medical research
    Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT (opens in new tab). And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini (opens in new tab). So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
    0 Commenti 0 condivisioni
  • PlayStation State of Play June 2025: Everything announced

    PlayStation State of Play June 2025: Everything announced
    Marvel at a night of sick reveals from Sony.

    Image credit: Eurogamer

    News

    by Connor Makar
    Contributor

    Published on June 4, 2025

    Another PlayStation State of Play took place tonight, giving as a peak at what's to come on the PS5 and PS5 Pro. The event was stacked with new trailers, including some reveals of entirely new games we've not seen before. Sony even remembered the PS VR2 exists!
    Whether you're here to double check what you've just seen, or want a catch-up on all the reveals you missed, this article will take you through everything shown off at the June State of Play event. Enjoy!

    Lumines Arise
    The show kicked off with a reveal trailer for Lumines Arive, a colourful and very musical way to start the event. It comes from the developers behind Tetris Effect, which was a banger. It's coming this Fall, and you can wishlist in now.

    Watch the new Lumines Arise trailer here!Watch on YouTube
    Pragmata
    Next up is Pragmata, the mysterious Capcom game that we've seen precious little of since its first reveal years ago. This time we got gameplay, loads of third person action goodness and a lovely sci-fi setting. Also, Diana is adorable. The game is set to come out at some point in 2026 to the PS5.

    Watch the Pragmata trailer here!Watch on YouTube
    Romeo is a Dead Man
    Next up is a grisly and cartoonish action game called Romeo is a Dead Man. It's coming from illustrious and brilliantly weird developer Grasshopper Manufacture, with both Suda 51 and Ren Yamazaki working on it. It'll be coming out in 2026... Maybe.

    Watch the Romeo is a Dead Man trailer here!Watch on YouTube
    Silent Hill f
    Now for something totally different with Silent Hill f, a third-person horror title that's certainly ramped the creepy factor up quite significantly. In it we see plenty of horrific mannequins with bloody knives in hand. It's set to come out on the 25th September, 2025 on the PS5.

    Watch the Silent Hill f trailer here!Watch on YouTube
    Bloodstained: The Scarlet Engagement
    We now go to a lush looking 2D side scroller called Bloodstained: The Scarlet Engagement, which has just been revealed. A sequel to a beloved indie and spiritual successor to classic Castlevania games, it'll be coming out in 2026 on the PS5.

    Check out the Bloodstained trailer here!Watch on YouTube
    Digimon Story Time Stranger
    Up next a gameplay trailer for Digimon Story Time Stranger, which looks absolutely killer. In it we see a variety of fan-favourite Digimon and a variety of characters seemingly in the thick of some nefarious medling between the physical and digital worlds. It'll be coming on the 3rd November 2025.

    Love me some Digimon. Watch the trailer here!Watch on YouTube
    Final Fantasy Tactics - The Ivalice Chronicles
    Now for a blast from the past with Final Fantasy Tactics: The Ivalice Chronicles. It'll include two versions of the original game, a classic version which is a faithful recreation, and an enhanced version with improved graphics and more. It'll be coming to the PS5 and PS4 on the 30th September.

    FF Tactics is back!Watch on YouTube
    Baby Steps

    Next up is Baby Steps, which is a fun exploration game where you've gotta control your leg movements as you climb a mountain. Now we finally have a release date! It comes to PS5 on the 8th September.

    This is a must-watch for those who like a chuckle.Watch on YouTube
    Hirogami
    Now for something whimsical. Hirogami allows you to transform into a variety of creatures through the power of folding. It's coming to PS5 on the 3rd September.

    A neat reveal! Check out the new trailerWatch on YouTube
    Everybody's Golf Hot Shots
    To the green we go with Everybody's Golf Hot Shots, with courses in 10 different regions around the world, each with weather effects and night time variants. Those who pre-order get Pacman - rad! It comes to the PS5 on the 5th September.

    Check out some golf right here!Watch on YouTube
    Ninja Gaiden Ragebound
    Here's a retro throwback. Ninja Gaiden Ragebound got a new gameplay trailer, and is coming on the 21st July on PS5 and PS4.

    A big win for lovers of the classics. Watch the trailer here!Watch on YouTube
    Cairn
    The Game Bakers are back at it again with Cairn. A new gameplay trailer just dropped, showing a perilous and scenic climb up a massive mountain. Like all Game Bakers titles, the trailer has a rad music track. It's coming to PS5 on the 5th November 2025, but you can download a demo today!

    A moving trailer for you to watch right here!Watch on YouTube
    Mortal Kombat Legacy Kollection
    Get over here! A reveal trailer for the Mortal Kombat Legacy Kollection just dropped, and with it a blend of retro titles you'll be able to play with this new collection as well as some lovely retro arcade footage. It contains Mortal Kombat, Mortal Kombat 2, Mortal Kombat 3, Ultimate Mortal Kombat 3, Mortal Kombat 4, and more. It's coming to the PS5 and PS4 in 2025.

    Moooooortal Kooooooombat Traaaaaaailer.Watch on YouTube
    Metal Gear Solid Delta Snake Eater
    Here's a big one! We got a new gameplay trailer for Metal Gear Solid Delta Snake Eater. Loads of memorable locations, gadgets, and moments on display in the stunning new engine.

    This isn't a dream - it's Snake Eater!Watch on YouTube
    Nioh 3
    A rad new reveal comes via Nioh 3, which got a fantastic new gameplay trailer at the State of Play. It releases in early 2026 on the PS5. A demo is available right now too, so give it a try!

    If you love some challenging and bloody action, you should watch this Nioh 3 trailer!Watch on YouTube
    Thief VR: Legacy of Shadow
    Finally some PSVR love! A new reveal trailer for Thief VR: Legacy of Shadow was shown off, giving us a glimpse of loads of stealth, robbery, and action. It's coming to VR in 2025.

    Some VR rep at the State of Play!Watch on YouTube
    Tides of Tomorrow
    Another new game got a trailer, this time Tides of Tomorrow. A very bright first-person action game set in a flooded dystopian world. It's coming 24th February, 2026 on the PS5.

    Loving the look of this trailer, give it a watch!Watch on YouTube
    Astro Bot new update
    An update to Astro Bot is up next, containing five new challenge levels, new guest bots, and an announcement that the Astro Bot DualSense controller is coming back this year... With a twist!

    Here's a look at what's coming to Astro Bot!Watch on YouTube
    Sea of Remnants
    Pirate time! A reveal trailer for Sea of Remnants was just shown off, giving us a peak of sailing, navigating various islands, and facing the mythical creatures of the deep. It'll be coming in 2026, you can wishlist it now.

    Grab your hat annd cutlass sailor.Watch on YouTube
    Sword of the Sea
    Here comes something beautiful. Sword of the Sea got a new trailer, with plenty of fantastic locations presented in vibrant tones throughout. It's coming on 19th August to the PS5, available on PlayStation Plus.

    Now this is my kind of vibe, watch it and find out why!Watch on YouTube
    FBC Firebreak
    Love co-op? FBC Firebreak is a spin-off to Remedy's Control series, and offers plenty of PvE action for those with a taste for the paranormal. It's coming to PS5 and the PlayStation Game Catalogue on 17th June, 2025.

    Here's the portion of the State of Play featuring FBC: Firebreak!Watch on YouTube
    New PS Plus games coming this summer
    A bunch of new games are coming to PlayStation Plus this Summer in a variety of forms. This includes:

    The original Deux Ex coming to PlayStation Plus Classics Catalogue on 17th June, 2025.
    Twisted Metal 3 & 4 coming to PlayStation Plus Classics Catalogue on 15th July, 2025.
    Resident Evil 2 & 3 coming to PlayStation Plus Classics Catalogue this Summer.
    Myst and Riven coming later this month as part of Days of Play

    First Light 007
    A massive reveal for the show! James Bond took the stage with First Light 007, our first look at the game. It kicks off with some introductory cinematics, but there are snippets of gameplay showing loads of spy action. It's coming in 2026 to the PS5.

    It's Bond, James Bond... TrailerWatch on YouTube
    Ghost of Yotei
    A short one here. Ghost of Yotei is getting a gameplay deep dive in July.

    Stay tuned for more info!Watch on YouTube
    Marvel Tokon Fighting Souls
    Trying not to freak out over here. Arc System Works just revealed Marvel Tokon Fighting Souls. A 3v3 fighting game featuring plenty of beloved Marvel characters. It's coming to PS5 and PC in 2026.

    It's 11PM at night and I'm trying not to scream. Mahvel Baby!Watch on YouTube
    #playstation #state #play #june #everything
    PlayStation State of Play June 2025: Everything announced
    PlayStation State of Play June 2025: Everything announced Marvel at a night of sick reveals from Sony. Image credit: Eurogamer News by Connor Makar Contributor Published on June 4, 2025 Another PlayStation State of Play took place tonight, giving as a peak at what's to come on the PS5 and PS5 Pro. The event was stacked with new trailers, including some reveals of entirely new games we've not seen before. Sony even remembered the PS VR2 exists! Whether you're here to double check what you've just seen, or want a catch-up on all the reveals you missed, this article will take you through everything shown off at the June State of Play event. Enjoy! Lumines Arise The show kicked off with a reveal trailer for Lumines Arive, a colourful and very musical way to start the event. It comes from the developers behind Tetris Effect, which was a banger. It's coming this Fall, and you can wishlist in now. Watch the new Lumines Arise trailer here!Watch on YouTube Pragmata Next up is Pragmata, the mysterious Capcom game that we've seen precious little of since its first reveal years ago. This time we got gameplay, loads of third person action goodness and a lovely sci-fi setting. Also, Diana is adorable. The game is set to come out at some point in 2026 to the PS5. Watch the Pragmata trailer here!Watch on YouTube Romeo is a Dead Man Next up is a grisly and cartoonish action game called Romeo is a Dead Man. It's coming from illustrious and brilliantly weird developer Grasshopper Manufacture, with both Suda 51 and Ren Yamazaki working on it. It'll be coming out in 2026... Maybe. Watch the Romeo is a Dead Man trailer here!Watch on YouTube Silent Hill f Now for something totally different with Silent Hill f, a third-person horror title that's certainly ramped the creepy factor up quite significantly. In it we see plenty of horrific mannequins with bloody knives in hand. It's set to come out on the 25th September, 2025 on the PS5. Watch the Silent Hill f trailer here!Watch on YouTube Bloodstained: The Scarlet Engagement We now go to a lush looking 2D side scroller called Bloodstained: The Scarlet Engagement, which has just been revealed. A sequel to a beloved indie and spiritual successor to classic Castlevania games, it'll be coming out in 2026 on the PS5. Check out the Bloodstained trailer here!Watch on YouTube Digimon Story Time Stranger Up next a gameplay trailer for Digimon Story Time Stranger, which looks absolutely killer. In it we see a variety of fan-favourite Digimon and a variety of characters seemingly in the thick of some nefarious medling between the physical and digital worlds. It'll be coming on the 3rd November 2025. Love me some Digimon. Watch the trailer here!Watch on YouTube Final Fantasy Tactics - The Ivalice Chronicles Now for a blast from the past with Final Fantasy Tactics: The Ivalice Chronicles. It'll include two versions of the original game, a classic version which is a faithful recreation, and an enhanced version with improved graphics and more. It'll be coming to the PS5 and PS4 on the 30th September. FF Tactics is back!Watch on YouTube Baby Steps Next up is Baby Steps, which is a fun exploration game where you've gotta control your leg movements as you climb a mountain. Now we finally have a release date! It comes to PS5 on the 8th September. This is a must-watch for those who like a chuckle.Watch on YouTube Hirogami Now for something whimsical. Hirogami allows you to transform into a variety of creatures through the power of folding. It's coming to PS5 on the 3rd September. A neat reveal! Check out the new trailerWatch on YouTube Everybody's Golf Hot Shots To the green we go with Everybody's Golf Hot Shots, with courses in 10 different regions around the world, each with weather effects and night time variants. Those who pre-order get Pacman - rad! It comes to the PS5 on the 5th September. Check out some golf right here!Watch on YouTube Ninja Gaiden Ragebound Here's a retro throwback. Ninja Gaiden Ragebound got a new gameplay trailer, and is coming on the 21st July on PS5 and PS4. A big win for lovers of the classics. Watch the trailer here!Watch on YouTube Cairn The Game Bakers are back at it again with Cairn. A new gameplay trailer just dropped, showing a perilous and scenic climb up a massive mountain. Like all Game Bakers titles, the trailer has a rad music track. It's coming to PS5 on the 5th November 2025, but you can download a demo today! A moving trailer for you to watch right here!Watch on YouTube Mortal Kombat Legacy Kollection Get over here! A reveal trailer for the Mortal Kombat Legacy Kollection just dropped, and with it a blend of retro titles you'll be able to play with this new collection as well as some lovely retro arcade footage. It contains Mortal Kombat, Mortal Kombat 2, Mortal Kombat 3, Ultimate Mortal Kombat 3, Mortal Kombat 4, and more. It's coming to the PS5 and PS4 in 2025. Moooooortal Kooooooombat Traaaaaaailer.Watch on YouTube Metal Gear Solid Delta Snake Eater Here's a big one! We got a new gameplay trailer for Metal Gear Solid Delta Snake Eater. Loads of memorable locations, gadgets, and moments on display in the stunning new engine. This isn't a dream - it's Snake Eater!Watch on YouTube Nioh 3 A rad new reveal comes via Nioh 3, which got a fantastic new gameplay trailer at the State of Play. It releases in early 2026 on the PS5. A demo is available right now too, so give it a try! If you love some challenging and bloody action, you should watch this Nioh 3 trailer!Watch on YouTube Thief VR: Legacy of Shadow Finally some PSVR love! A new reveal trailer for Thief VR: Legacy of Shadow was shown off, giving us a glimpse of loads of stealth, robbery, and action. It's coming to VR in 2025. Some VR rep at the State of Play!Watch on YouTube Tides of Tomorrow Another new game got a trailer, this time Tides of Tomorrow. A very bright first-person action game set in a flooded dystopian world. It's coming 24th February, 2026 on the PS5. Loving the look of this trailer, give it a watch!Watch on YouTube Astro Bot new update An update to Astro Bot is up next, containing five new challenge levels, new guest bots, and an announcement that the Astro Bot DualSense controller is coming back this year... With a twist! Here's a look at what's coming to Astro Bot!Watch on YouTube Sea of Remnants Pirate time! A reveal trailer for Sea of Remnants was just shown off, giving us a peak of sailing, navigating various islands, and facing the mythical creatures of the deep. It'll be coming in 2026, you can wishlist it now. Grab your hat annd cutlass sailor.Watch on YouTube Sword of the Sea Here comes something beautiful. Sword of the Sea got a new trailer, with plenty of fantastic locations presented in vibrant tones throughout. It's coming on 19th August to the PS5, available on PlayStation Plus. Now this is my kind of vibe, watch it and find out why!Watch on YouTube FBC Firebreak Love co-op? FBC Firebreak is a spin-off to Remedy's Control series, and offers plenty of PvE action for those with a taste for the paranormal. It's coming to PS5 and the PlayStation Game Catalogue on 17th June, 2025. Here's the portion of the State of Play featuring FBC: Firebreak!Watch on YouTube New PS Plus games coming this summer A bunch of new games are coming to PlayStation Plus this Summer in a variety of forms. This includes: The original Deux Ex coming to PlayStation Plus Classics Catalogue on 17th June, 2025. Twisted Metal 3 & 4 coming to PlayStation Plus Classics Catalogue on 15th July, 2025. Resident Evil 2 & 3 coming to PlayStation Plus Classics Catalogue this Summer. Myst and Riven coming later this month as part of Days of Play First Light 007 A massive reveal for the show! James Bond took the stage with First Light 007, our first look at the game. It kicks off with some introductory cinematics, but there are snippets of gameplay showing loads of spy action. It's coming in 2026 to the PS5. It's Bond, James Bond... TrailerWatch on YouTube Ghost of Yotei A short one here. Ghost of Yotei is getting a gameplay deep dive in July. Stay tuned for more info!Watch on YouTube Marvel Tokon Fighting Souls Trying not to freak out over here. Arc System Works just revealed Marvel Tokon Fighting Souls. A 3v3 fighting game featuring plenty of beloved Marvel characters. It's coming to PS5 and PC in 2026. It's 11PM at night and I'm trying not to scream. Mahvel Baby!Watch on YouTube #playstation #state #play #june #everything
    WWW.EUROGAMER.NET
    PlayStation State of Play June 2025: Everything announced
    PlayStation State of Play June 2025: Everything announced Marvel at a night of sick reveals from Sony. Image credit: Eurogamer News by Connor Makar Contributor Published on June 4, 2025 Another PlayStation State of Play took place tonight, giving as a peak at what's to come on the PS5 and PS5 Pro. The event was stacked with new trailers, including some reveals of entirely new games we've not seen before. Sony even remembered the PS VR2 exists! Whether you're here to double check what you've just seen, or want a catch-up on all the reveals you missed, this article will take you through everything shown off at the June State of Play event. Enjoy! Lumines Arise The show kicked off with a reveal trailer for Lumines Arive, a colourful and very musical way to start the event. It comes from the developers behind Tetris Effect, which was a banger. It's coming this Fall, and you can wishlist in now. Watch the new Lumines Arise trailer here!Watch on YouTube Pragmata Next up is Pragmata, the mysterious Capcom game that we've seen precious little of since its first reveal years ago. This time we got gameplay, loads of third person action goodness and a lovely sci-fi setting. Also, Diana is adorable. The game is set to come out at some point in 2026 to the PS5. Watch the Pragmata trailer here!Watch on YouTube Romeo is a Dead Man Next up is a grisly and cartoonish action game called Romeo is a Dead Man. It's coming from illustrious and brilliantly weird developer Grasshopper Manufacture, with both Suda 51 and Ren Yamazaki working on it. It'll be coming out in 2026... Maybe. Watch the Romeo is a Dead Man trailer here!Watch on YouTube Silent Hill f Now for something totally different with Silent Hill f, a third-person horror title that's certainly ramped the creepy factor up quite significantly. In it we see plenty of horrific mannequins with bloody knives in hand. It's set to come out on the 25th September, 2025 on the PS5. Watch the Silent Hill f trailer here!Watch on YouTube Bloodstained: The Scarlet Engagement We now go to a lush looking 2D side scroller called Bloodstained: The Scarlet Engagement, which has just been revealed. A sequel to a beloved indie and spiritual successor to classic Castlevania games, it'll be coming out in 2026 on the PS5. Check out the Bloodstained trailer here!Watch on YouTube Digimon Story Time Stranger Up next a gameplay trailer for Digimon Story Time Stranger, which looks absolutely killer. In it we see a variety of fan-favourite Digimon and a variety of characters seemingly in the thick of some nefarious medling between the physical and digital worlds. It'll be coming on the 3rd November 2025. Love me some Digimon. Watch the trailer here!Watch on YouTube Final Fantasy Tactics - The Ivalice Chronicles Now for a blast from the past with Final Fantasy Tactics: The Ivalice Chronicles. It'll include two versions of the original game, a classic version which is a faithful recreation, and an enhanced version with improved graphics and more. It'll be coming to the PS5 and PS4 on the 30th September. FF Tactics is back!Watch on YouTube Baby Steps Next up is Baby Steps, which is a fun exploration game where you've gotta control your leg movements as you climb a mountain. Now we finally have a release date! It comes to PS5 on the 8th September. This is a must-watch for those who like a chuckle.Watch on YouTube Hirogami Now for something whimsical. Hirogami allows you to transform into a variety of creatures through the power of folding. It's coming to PS5 on the 3rd September. A neat reveal! Check out the new trailerWatch on YouTube Everybody's Golf Hot Shots To the green we go with Everybody's Golf Hot Shots, with courses in 10 different regions around the world, each with weather effects and night time variants. Those who pre-order get Pacman - rad! It comes to the PS5 on the 5th September. Check out some golf right here!Watch on YouTube Ninja Gaiden Ragebound Here's a retro throwback. Ninja Gaiden Ragebound got a new gameplay trailer, and is coming on the 21st July on PS5 and PS4. A big win for lovers of the classics. Watch the trailer here!Watch on YouTube Cairn The Game Bakers are back at it again with Cairn. A new gameplay trailer just dropped, showing a perilous and scenic climb up a massive mountain. Like all Game Bakers titles, the trailer has a rad music track. It's coming to PS5 on the 5th November 2025, but you can download a demo today! A moving trailer for you to watch right here!Watch on YouTube Mortal Kombat Legacy Kollection Get over here! A reveal trailer for the Mortal Kombat Legacy Kollection just dropped, and with it a blend of retro titles you'll be able to play with this new collection as well as some lovely retro arcade footage. It contains Mortal Kombat, Mortal Kombat 2, Mortal Kombat 3, Ultimate Mortal Kombat 3, Mortal Kombat 4, and more. It's coming to the PS5 and PS4 in 2025. Moooooortal Kooooooombat Traaaaaaailer.Watch on YouTube Metal Gear Solid Delta Snake Eater Here's a big one! We got a new gameplay trailer for Metal Gear Solid Delta Snake Eater. Loads of memorable locations, gadgets, and moments on display in the stunning new engine. This isn't a dream - it's Snake Eater!Watch on YouTube Nioh 3 A rad new reveal comes via Nioh 3, which got a fantastic new gameplay trailer at the State of Play. It releases in early 2026 on the PS5. A demo is available right now too, so give it a try! If you love some challenging and bloody action, you should watch this Nioh 3 trailer!Watch on YouTube Thief VR: Legacy of Shadow Finally some PSVR love! A new reveal trailer for Thief VR: Legacy of Shadow was shown off, giving us a glimpse of loads of stealth, robbery, and action. It's coming to VR in 2025. Some VR rep at the State of Play!Watch on YouTube Tides of Tomorrow Another new game got a trailer, this time Tides of Tomorrow. A very bright first-person action game set in a flooded dystopian world. It's coming 24th February, 2026 on the PS5. Loving the look of this trailer, give it a watch!Watch on YouTube Astro Bot new update An update to Astro Bot is up next, containing five new challenge levels, new guest bots, and an announcement that the Astro Bot DualSense controller is coming back this year... With a twist! Here's a look at what's coming to Astro Bot!Watch on YouTube Sea of Remnants Pirate time! A reveal trailer for Sea of Remnants was just shown off, giving us a peak of sailing, navigating various islands, and facing the mythical creatures of the deep. It'll be coming in 2026, you can wishlist it now. Grab your hat annd cutlass sailor.Watch on YouTube Sword of the Sea Here comes something beautiful. Sword of the Sea got a new trailer, with plenty of fantastic locations presented in vibrant tones throughout. It's coming on 19th August to the PS5, available on PlayStation Plus. Now this is my kind of vibe, watch it and find out why!Watch on YouTube FBC Firebreak Love co-op? FBC Firebreak is a spin-off to Remedy's Control series, and offers plenty of PvE action for those with a taste for the paranormal. It's coming to PS5 and the PlayStation Game Catalogue on 17th June, 2025. Here's the portion of the State of Play featuring FBC: Firebreak!Watch on YouTube New PS Plus games coming this summer A bunch of new games are coming to PlayStation Plus this Summer in a variety of forms. This includes: The original Deux Ex coming to PlayStation Plus Classics Catalogue on 17th June, 2025. Twisted Metal 3 & 4 coming to PlayStation Plus Classics Catalogue on 15th July, 2025. Resident Evil 2 & 3 coming to PlayStation Plus Classics Catalogue this Summer. Myst and Riven coming later this month as part of Days of Play First Light 007 A massive reveal for the show! James Bond took the stage with First Light 007, our first look at the game. It kicks off with some introductory cinematics, but there are snippets of gameplay showing loads of spy action. It's coming in 2026 to the PS5. It's Bond, James Bond... TrailerWatch on YouTube Ghost of Yotei A short one here. Ghost of Yotei is getting a gameplay deep dive in July. Stay tuned for more info!Watch on YouTube Marvel Tokon Fighting Souls Trying not to freak out over here. Arc System Works just revealed Marvel Tokon Fighting Souls. A 3v3 fighting game featuring plenty of beloved Marvel characters. It's coming to PS5 and PC in 2026. It's 11PM at night and I'm trying not to scream. Mahvel Baby!Watch on YouTube
    Like
    Love
    Wow
    Sad
    Angry
    308
    0 Commenti 0 condivisioni
  • Your Smart Home Got a New CEO and It’s Called the SwitchBot Hub 3

    SwitchBot has a knack for crafting ingenious IoT devices, those little problem-solvers like robotic curtain openers and automated button pressers that add a touch of futuristic convenience. Yet, the true linchpin, the secret sauce that elevates their entire ecosystem, is undoubtedly their Hub. It’s the central nervous system that takes individual smart products and weaves them into a cohesive, intelligent tapestry, turning the abstract concept of a ‘smart home’ into a tangible, daily experience.
    This unification through the Hub is what brings us closer to that almost mythical dream: a home where technology works in concert, where devices understand each other’s capabilities and, critically, anticipate your needs. It’s about creating an environment that doesn’t just react to commands, but proactively adapts, making your living space more intuitive, responsive, and, ultimately, more attuned to you. The new Hub 3 aims to refine this very connection.
    Designer: SwitchBot
    Click Here to Buy Now: The predecessor, the Hub 2, already laid a strong foundation. It brought Matter support into the SwitchBot ecosystem, along with reliable infrared controls, making it a versatile little box. It understood the assignment: bridge the old with the new. The Hub 3 takes that solid base and builds upon it, addressing not just functionality but also the nuanced interactions that make a device truly intuitive and, dare I say, enjoyable to use daily.

    Matter support, the industry’s push for interoperability, remains a cornerstone. The Hub 3 acts as a Matter bridge, capable of bringing up to 30 SwitchBot devices into the Matter fold, allowing them to play nice with platforms like Apple Home. Furthermore, it can send up to 30 distinct commands to other Matter-certified products already integrated into your Apple Home setup, with Home Assistant support on the horizon. This makes it a powerful orchestrator.

    One of the most striking additions is the new rotary dial, something SwitchBot calls its “Dial Master” technology. Giving users an intuitive tactile control that feels very familiar, it makes the Hub 3 even more user-friendly. Imagine adjusting your thermostat not by tapping an arrow repeatedly, but by smoothly turning a dial for that exact ±1°C change. The same applies to volume control or any other granular adjustment. This tactile feedback offers a level of hyper-controlled interaction that screen taps often lack, feeling more connected and satisfying.

    Beyond physical interaction, the Hub 3 gets smarter senses. While the trusty thermo-hygro sensormakes a return for indoor temperature and humidity, it’s now joined by a built-in light sensor. This seemingly small addition unlocks a new layer of intuitive automation. Your home can now react to ambient brightness, perhaps cueing your SwitchBot Curtain 3 to draw open gently as the sun rises, or dimming lights as natural light fades.

    Aesthetically, SwitchBot made a subtle but impactful shift from the Hub 2’s white casing to a sleek black for the Hub 3. This change makes the integrated display stand out significantly, improving readability at a glance. And that display now does more heavy lifting. It still shows essential indoor temperature and humidity, but can also pull in local outdoor weather data, giving you a quick forecast without reaching for your phone. Pair it with a SwitchBot Meter Pro, and it’ll even show CO2 levels.

    The Hub 2 featured two handy customizable buttons. The Hub 3 doubles down, offering four such buttons. This means more of your favorite automation scenes, like “Movie Night,” “Good Morning,” and “Away Mode,” are just a single press away. This reduces friction, making your smart home react faster to your needs without diving into an app for every little thing. It’s these quality-of-life improvements that often make the biggest difference in daily use.

    Crucially, the Hub 3 retains everything that made its predecessor a strong contender. The infrared control capabilities are still robust, supporting over 100,000 IR codes for your legacy AV gear and air conditioners, now with a signal that’s reportedly 150% stronger than the Hub Mini. Its deep integration with the existing SwitchBot ecosystem means your Bots, Curtain movers, and vacuums will feel right at home, working in concert.

    Of course, you still have your choice of control methods. Beyond the new dial and physical buttons, there’s comprehensive app control for setting up complex automations and remote access. Voice control via the usual assistants like Alexa and Google Assistant is present and accounted for, ensuring hands-free operation whenever you need it. This flexibility means the Hub 3 adapts to your preferences, not the other way around.

    The true power, as always, lies in the DIY automation scenes. Imagine your AC, humidifier, and dehumidifier working together, orchestrated by the Hub 3 to maintain your perfect 23°C and 58% humidity. Or picture an energy-saving scene where the built-in motion sensor, coupled with geofencing, detects an empty house and powers down non-essential appliances. It’s these intelligent, personalized routines that transform a collection of smart devices into a truly smart home.

    The SwitchBot Hub 3 feels like the most potent iteration of that “secret sauce” yet. It takes the individual brilliance of SwitchBot’s gadgets and, through enhanced sensory input and more tactile controls, truly deepens that crucial understanding between device, environment, and user. The best part? It plugs right into your smart home’s existing setup, communicating with your slew of IoT devices – even more efficiently if you’ve got a Hub 2 or Hub Mini and you’re looking to upgrade.
    Click Here to Buy Now: The post Your Smart Home Got a New CEO and It’s Called the SwitchBot Hub 3 first appeared on Yanko Design.
    #your #smart #home #got #new
    Your Smart Home Got a New CEO and It’s Called the SwitchBot Hub 3
    SwitchBot has a knack for crafting ingenious IoT devices, those little problem-solvers like robotic curtain openers and automated button pressers that add a touch of futuristic convenience. Yet, the true linchpin, the secret sauce that elevates their entire ecosystem, is undoubtedly their Hub. It’s the central nervous system that takes individual smart products and weaves them into a cohesive, intelligent tapestry, turning the abstract concept of a ‘smart home’ into a tangible, daily experience. This unification through the Hub is what brings us closer to that almost mythical dream: a home where technology works in concert, where devices understand each other’s capabilities and, critically, anticipate your needs. It’s about creating an environment that doesn’t just react to commands, but proactively adapts, making your living space more intuitive, responsive, and, ultimately, more attuned to you. The new Hub 3 aims to refine this very connection. Designer: SwitchBot Click Here to Buy Now: The predecessor, the Hub 2, already laid a strong foundation. It brought Matter support into the SwitchBot ecosystem, along with reliable infrared controls, making it a versatile little box. It understood the assignment: bridge the old with the new. The Hub 3 takes that solid base and builds upon it, addressing not just functionality but also the nuanced interactions that make a device truly intuitive and, dare I say, enjoyable to use daily. Matter support, the industry’s push for interoperability, remains a cornerstone. The Hub 3 acts as a Matter bridge, capable of bringing up to 30 SwitchBot devices into the Matter fold, allowing them to play nice with platforms like Apple Home. Furthermore, it can send up to 30 distinct commands to other Matter-certified products already integrated into your Apple Home setup, with Home Assistant support on the horizon. This makes it a powerful orchestrator. One of the most striking additions is the new rotary dial, something SwitchBot calls its “Dial Master” technology. Giving users an intuitive tactile control that feels very familiar, it makes the Hub 3 even more user-friendly. Imagine adjusting your thermostat not by tapping an arrow repeatedly, but by smoothly turning a dial for that exact ±1°C change. The same applies to volume control or any other granular adjustment. This tactile feedback offers a level of hyper-controlled interaction that screen taps often lack, feeling more connected and satisfying. Beyond physical interaction, the Hub 3 gets smarter senses. While the trusty thermo-hygro sensormakes a return for indoor temperature and humidity, it’s now joined by a built-in light sensor. This seemingly small addition unlocks a new layer of intuitive automation. Your home can now react to ambient brightness, perhaps cueing your SwitchBot Curtain 3 to draw open gently as the sun rises, or dimming lights as natural light fades. Aesthetically, SwitchBot made a subtle but impactful shift from the Hub 2’s white casing to a sleek black for the Hub 3. This change makes the integrated display stand out significantly, improving readability at a glance. And that display now does more heavy lifting. It still shows essential indoor temperature and humidity, but can also pull in local outdoor weather data, giving you a quick forecast without reaching for your phone. Pair it with a SwitchBot Meter Pro, and it’ll even show CO2 levels. The Hub 2 featured two handy customizable buttons. The Hub 3 doubles down, offering four such buttons. This means more of your favorite automation scenes, like “Movie Night,” “Good Morning,” and “Away Mode,” are just a single press away. This reduces friction, making your smart home react faster to your needs without diving into an app for every little thing. It’s these quality-of-life improvements that often make the biggest difference in daily use. Crucially, the Hub 3 retains everything that made its predecessor a strong contender. The infrared control capabilities are still robust, supporting over 100,000 IR codes for your legacy AV gear and air conditioners, now with a signal that’s reportedly 150% stronger than the Hub Mini. Its deep integration with the existing SwitchBot ecosystem means your Bots, Curtain movers, and vacuums will feel right at home, working in concert. Of course, you still have your choice of control methods. Beyond the new dial and physical buttons, there’s comprehensive app control for setting up complex automations and remote access. Voice control via the usual assistants like Alexa and Google Assistant is present and accounted for, ensuring hands-free operation whenever you need it. This flexibility means the Hub 3 adapts to your preferences, not the other way around. The true power, as always, lies in the DIY automation scenes. Imagine your AC, humidifier, and dehumidifier working together, orchestrated by the Hub 3 to maintain your perfect 23°C and 58% humidity. Or picture an energy-saving scene where the built-in motion sensor, coupled with geofencing, detects an empty house and powers down non-essential appliances. It’s these intelligent, personalized routines that transform a collection of smart devices into a truly smart home. The SwitchBot Hub 3 feels like the most potent iteration of that “secret sauce” yet. It takes the individual brilliance of SwitchBot’s gadgets and, through enhanced sensory input and more tactile controls, truly deepens that crucial understanding between device, environment, and user. The best part? It plugs right into your smart home’s existing setup, communicating with your slew of IoT devices – even more efficiently if you’ve got a Hub 2 or Hub Mini and you’re looking to upgrade. Click Here to Buy Now: The post Your Smart Home Got a New CEO and It’s Called the SwitchBot Hub 3 first appeared on Yanko Design. #your #smart #home #got #new
    WWW.YANKODESIGN.COM
    Your Smart Home Got a New CEO and It’s Called the SwitchBot Hub 3
    SwitchBot has a knack for crafting ingenious IoT devices, those little problem-solvers like robotic curtain openers and automated button pressers that add a touch of futuristic convenience. Yet, the true linchpin, the secret sauce that elevates their entire ecosystem, is undoubtedly their Hub. It’s the central nervous system that takes individual smart products and weaves them into a cohesive, intelligent tapestry, turning the abstract concept of a ‘smart home’ into a tangible, daily experience. This unification through the Hub is what brings us closer to that almost mythical dream: a home where technology works in concert, where devices understand each other’s capabilities and, critically, anticipate your needs. It’s about creating an environment that doesn’t just react to commands, but proactively adapts, making your living space more intuitive, responsive, and, ultimately, more attuned to you. The new Hub 3 aims to refine this very connection. Designer: SwitchBot Click Here to Buy Now: $119.99 The predecessor, the Hub 2, already laid a strong foundation. It brought Matter support into the SwitchBot ecosystem, along with reliable infrared controls, making it a versatile little box. It understood the assignment: bridge the old with the new. The Hub 3 takes that solid base and builds upon it, addressing not just functionality but also the nuanced interactions that make a device truly intuitive and, dare I say, enjoyable to use daily. Matter support, the industry’s push for interoperability, remains a cornerstone. The Hub 3 acts as a Matter bridge, capable of bringing up to 30 SwitchBot devices into the Matter fold, allowing them to play nice with platforms like Apple Home. Furthermore, it can send up to 30 distinct commands to other Matter-certified products already integrated into your Apple Home setup, with Home Assistant support on the horizon. This makes it a powerful orchestrator. One of the most striking additions is the new rotary dial, something SwitchBot calls its “Dial Master” technology. Giving users an intuitive tactile control that feels very familiar (think ovens, radios, car ACs), it makes the Hub 3 even more user-friendly. Imagine adjusting your thermostat not by tapping an arrow repeatedly, but by smoothly turning a dial for that exact ±1°C change. The same applies to volume control or any other granular adjustment. This tactile feedback offers a level of hyper-controlled interaction that screen taps often lack, feeling more connected and satisfying. Beyond physical interaction, the Hub 3 gets smarter senses. While the trusty thermo-hygro sensor (cleverly integrated into its cable) makes a return for indoor temperature and humidity, it’s now joined by a built-in light sensor. This seemingly small addition unlocks a new layer of intuitive automation. Your home can now react to ambient brightness, perhaps cueing your SwitchBot Curtain 3 to draw open gently as the sun rises, or dimming lights as natural light fades. Aesthetically, SwitchBot made a subtle but impactful shift from the Hub 2’s white casing to a sleek black for the Hub 3. This change makes the integrated display stand out significantly, improving readability at a glance. And that display now does more heavy lifting. It still shows essential indoor temperature and humidity, but can also pull in local outdoor weather data, giving you a quick forecast without reaching for your phone. Pair it with a SwitchBot Meter Pro, and it’ll even show CO2 levels. The Hub 2 featured two handy customizable buttons. The Hub 3 doubles down, offering four such buttons. This means more of your favorite automation scenes, like “Movie Night,” “Good Morning,” and “Away Mode,” are just a single press away. This reduces friction, making your smart home react faster to your needs without diving into an app for every little thing. It’s these quality-of-life improvements that often make the biggest difference in daily use. Crucially, the Hub 3 retains everything that made its predecessor a strong contender. The infrared control capabilities are still robust, supporting over 100,000 IR codes for your legacy AV gear and air conditioners, now with a signal that’s reportedly 150% stronger than the Hub Mini. Its deep integration with the existing SwitchBot ecosystem means your Bots, Curtain movers, and vacuums will feel right at home, working in concert. Of course, you still have your choice of control methods. Beyond the new dial and physical buttons, there’s comprehensive app control for setting up complex automations and remote access. Voice control via the usual assistants like Alexa and Google Assistant is present and accounted for, ensuring hands-free operation whenever you need it. This flexibility means the Hub 3 adapts to your preferences, not the other way around. The true power, as always, lies in the DIY automation scenes. Imagine your AC, humidifier, and dehumidifier working together, orchestrated by the Hub 3 to maintain your perfect 23°C and 58% humidity. Or picture an energy-saving scene where the built-in motion sensor, coupled with geofencing, detects an empty house and powers down non-essential appliances. It’s these intelligent, personalized routines that transform a collection of smart devices into a truly smart home. The SwitchBot Hub 3 feels like the most potent iteration of that “secret sauce” yet. It takes the individual brilliance of SwitchBot’s gadgets and, through enhanced sensory input and more tactile controls, truly deepens that crucial understanding between device, environment, and user. The best part? It plugs right into your smart home’s existing setup, communicating with your slew of IoT devices – even more efficiently if you’ve got a Hub 2 or Hub Mini and you’re looking to upgrade. Click Here to Buy Now: $119.99The post Your Smart Home Got a New CEO and It’s Called the SwitchBot Hub 3 first appeared on Yanko Design.
    0 Commenti 0 condivisioni
  • Best Legendary & Mythical Pokemon Plushies You Can Actually Buy

    Legendary Pokemon used to feel… untouchable. These were the creatures that could reshape continents, rewrite time, or send you into a nightmare loop for eternity. You’d spend hours chasing them down in-game, holding your breath every time the Pokeball shook. They were mythic. They were serious. Now? They’re plushies. Soft, squishy, slightly ridiculous plushies.
    #best #legendary #ampamp #mythical #pokemon
    Best Legendary & Mythical Pokemon Plushies You Can Actually Buy
    Legendary Pokemon used to feel… untouchable. These were the creatures that could reshape continents, rewrite time, or send you into a nightmare loop for eternity. You’d spend hours chasing them down in-game, holding your breath every time the Pokeball shook. They were mythic. They were serious. Now? They’re plushies. Soft, squishy, slightly ridiculous plushies. #best #legendary #ampamp #mythical #pokemon
    GAMERANT.COM
    Best Legendary & Mythical Pokemon Plushies You Can Actually Buy
    Legendary Pokemon used to feel… untouchable. These were the creatures that could reshape continents, rewrite time, or send you into a nightmare loop for eternity. You’d spend hours chasing them down in-game, holding your breath every time the Pokeball shook. They were mythic. They were serious. Now? They’re plushies. Soft, squishy, slightly ridiculous plushies.
    0 Commenti 0 condivisioni