• In the shadows of every mission, I found solace, yet the moment I stopped playing Hitman, I was left with a hollow ache. The thrill of stealth and the intricacies of each assassination became my escape, but reality crashed in like a cold wave. Alone in a city that never sleeps, I felt the weight of my isolation, mirrored in Agent 47’s silent presence. Hours spent in a world of calculated precision were bliss, but they couldn’t fill the void in my heart. Sometimes, the virtual can feel more real than the tangible loneliness surrounding me.

    #Hitman #StealthGame #Loneliness #GamingLife #EmotionalEscape
    When I Stopped Playing Mission Stories In Hitman, I Discovered What A Great Stealth Game It Is
    kotaku.com
    I spent about 35 hours in Hitman World of Assassination this past weekend—interrupted mostly by the non-optional need to sleep and occasional concerns over the health of my GPU running for so long in a non-air-conditioned apartment room in New York City…
  • Have you ever stopped to think about the NASA logo? This incredible symbol, designed in 1958, represents not just a space agency, but a dream that transcends boundaries! The iconic design captures the spirit of exploration, curiosity, and innovation. It reminds us that the sky is not the limit—it's just the beginning!

    Every time we look at the NASA logo, we're inspired to reach higher, challenge ourselves, and embrace the unknown. Let's carry that enthusiasm into our everyday lives! Remember, just like NASA, we too can break barriers and reach for the stars!

    #NASALogo #SpaceExploration #Inspiration #DreamBig #ReachForTheStars
    The NASA logo: who designed it, what it means, and why it is a universal icon
    graffica.info
    Tied to space since 1958, NASA has forged a powerful image whose icon has crossed every kind of border, even reaching space itself. But who designed the logo, and what does it represent?
  • Would you switch browsers for a chatbot?

    Hi, friends! Welcome to Installer No. 87, your guide to the best and Verge-iest stuff in the world. (If you're new here, welcome, happy It's Officially Too Hot Now Week, and also you can read all the old editions at the Installer homepage.) This week, I've been reading about Sabrina Carpenter and Khaby Lame and intimacy coordinators, finally making a dent in Barbarians at the Gate, watching all the Ben Schwartz and Friends I can find on YouTube, planning my days with the new Finalist beta, recklessly installing all the Apple developer betas after WWDC, thoroughly enjoying Dakota Johnson's current press tour, and trying to clear all my inboxes before I go on parental leave. It's… going.

    I also have for you a much-awaited new browser, a surprise update to a great photo editor, a neat trailer for a meh-looking movie, a classic Steve Jobs speech, and much more. Slightly shorter issue this week, sorry; there's just a lot going on, but I didn't want to leave y'all hanging entirely. Oh, and: we'll be off next week, for Juneteenth, vacation, and general summer chaos reasons. We'll be back in full force after that, though! Let's get into it.

    (As always, the best part of Installer is your ideas and tips. What do you want to know more about? What awesome tricks do you know that everyone else should? What app should everyone be using? Tell me everything: installer@theverge.com. And if you know someone else who might enjoy Installer, forward it to them and tell them to subscribe here.)

    The Drop

    Dia. I know there are a lot of Arc fans here in the Installerverse, and I know you, like me, will have a lot of feelings about the company's new and extremely AI-focused browser. Personally, I don't see leaving Arc anytime soon, but there are some really fascinating ideas (and nice design touches) in Dia already.

    Snapseed 3.0. I completely forgot Snapseed even existed, and now here's a really nice update with a bunch of new editing tools and a nice new redesign! As straightforward photo editors go, this is one of the better ones. The new version is only on iOS right now, but I assume it's heading to Android shortly.

    "I Tried To Make Something In America." I was first turned onto the story of the Smarter Scrubber by a great Search Engine episode, and this is a great companion piece about what it really takes to bring manufacturing back to the US, and why it's hard to justify.

    That link, and the trailer, will only do anything for you if you have a newer iPhone. But even if you don't care about the movie, the trailer — which actually buzzes in sync with the car's rumbles and revs — is just really, really cool.

    Android 16. You can't get the cool, colorful new look just yet, or the desktop mode I am extremely excited about — there's a lot of good stuff in Android 16, but most of it is coming later. Still, Live Updates look good, and there's some helpful accessibility stuff as well.

    The Infinite Machine Olto. I am such a sucker for any kind of futuristic-looking electric scooter, and this one really hits the sweet spot. Part moped, part e-bike, all Blade Runner vibes. If it wasn't $3,500, then I would've probably ordered one already.

    The Fujifilm X-E5. I kept wondering why Fujifilm didn't just make, like, a hundred different great-looking cameras at every imaginable price, because everyone wants a camera this cool. Well, here we are! It's a spin on the X100VI but with interchangeable lenses and a few power-user features. All my photographer friends are going to want this.

    Call Her Alex. I confess I'm no Call Her Daddy diehard, but I found this two-part doc on Alex Cooper really interesting. Cooper's story is all about understanding people, the internet, and what it means to feel connected now. It's all very low-stakes and somehow also existential? It's only two parts; you should watch it.

    "Steve Jobs - 2005 Stanford Commencement Address." For the 20th anniversary of Jobs' famous (and genuinely fabulous) speech, the Steve Jobs Archive put together a big package of stories, notes, and other materials around the speech. Plus, a newly high-def version of the video. This one's always worth the 15 minutes.

    Dune: Awakening. Dune has ascended to the rare territory of "I will check out anything from this franchise, ever, no questions asked." This game is big on open-world survival and ornithopters, too, so it's even more my kind of thing. And it's apparently punishingly difficult in spots.

    Crowdsourced

    Here's what the Installer community is into this week. I want to know what you're into right now as well! Email installer@theverge.com or message me on Signal — @davidpierce.11 — with your recommendations for anything and everything, and we'll feature some of our favorites here every week. For even more great recommendations, check out the replies to this post on Threads and this post on Bluesky.

    "I had tried the paper planner in the leather Paper Republic journal but have since moved on to the Remarkable Paper Pro color e-ink device, which takes everything you like about paper but makes it editable and color coded. Combine this with a Remarkable planner in PDF format off of Etsy and you are golden." — Jason

    "I started reading a manga series from content creator Cory Kenshin called Monsters We Make. So far, I love it. Already preordered Vol. 2." — Rob

    "I recently went down the third-party controller rabbit hole after my trusty adapted Xbox One controller finally kicked the bucket, and I wanted something I could use across my PC, phone, handheld, Switch, etc. I've been playing with the GameSir Cyclone 2 for a few weeks, and it feels really deluxe. The thumbsticks are impossibly smooth and accurate thanks to its TMR joysticks. The face buttons took a second for my brain to adjust to; the short travel distance initially registered as mushy, but once I stopped trying to pound the buttons like I was at the arcade, I found the subtle mechanical click super satisfying." — Sam

    "The Apple TV Plus miniseries Long Way Home. It's Ewan McGregor and Charley Boorman's fourth Long Way series. This time they are touring some European countries on vintage bikes that they fixed, and it's such a light-hearted show from two really down-to-earth humans. Connecting with other people in different cultures and seeing their journey is such a treat!" — Esmael

    "Podcast recommendation: Devil and the Deep Blue Sea by Christianity Today. A deep dive into the Satanic Panic of the '80s and '90s." — Drew

    "Splatoon 3 (the free Switch 2 update) and the new How to Train Your Dragon." — Aaron

    "I can't put Mario Kart World down. When I get tired of the intense Knockout Tour mode, I go to Free Roam and try to knock out P-Switch challenges, some of which are really tough! I'm obsessed." — Dave

    "Fable, a cool app for finding books with virtual book clubs. It's the closest to a more cozy online bookstore with more honest reviews. I just wish you could click on the author's name to see their other books." — Astrid

    "This is Summer Games Fest week (formerly E3, RIP) and there are a TON of game demos to try out on Steam. The one that has caught my attention / play time the most is Wildgate. It's a team-based spaceship shooter where ship crews battle and try to escape with a powerful artifact." — Sean

    "Battlefront 2 is back for some reason. Still looks great." — Ian

    Signing off

    I have long been fascinated by weather forecasting. I recommend Andrew Blum's book, The Weather Machine, to people all the time, as a way to understand both how we learned to predict the weather and why it's a literally culture-changing thing to be able to do so. And if you want to make yourself so, so angry, there's a whole chunk of Michael Lewis's book, The Fifth Risk, about how a bunch of companies managed to basically privatize forecasts… based on government data. The weather is a huge business, an extremely powerful political force, and even more important to our way of life than we realize. And we're really good at predicting the weather!

    I've also been hearing for years that weather forecasting is a perfect use for AI. It's all about vast quantities of historical data, tiny fluctuations in readings, and finding patterns that often don't want to be found. So, of course, as soon as I read my colleague Justine Calma's story about a new Google project called Weather Lab, I spent the next hour poking through the data to see how well DeepMind managed to predict and track recent storms. It's deeply wonky stuff, but it's cool to see Big Tech trying to figure out Mother Nature — and almost getting it right. Almost.

    See you next week!
    Would you switch browsers for a chatbot?
    www.theverge.com
    Hi, friends! Welcome to Installer No. 87, your guide to the best and Verge-iest stuff in the world.
  • Patch Notes #9: Xbox debuts its first handhelds, Hong Kong authorities ban a video game, and big hopes for Big Walk

    We did it, gang. We completed another week in the impossible survival sim that is real life. Give yourself an appreciative pat on the back and gaze wistfully towards whatever adventures or blissful respite the weekend might bring.

    This week I've mostly been recovering from my birthday celebrations, which entailed a bountiful Korean barbecue that left me with a rampant case of the meat sweats and a pub crawl around one of Manchester's finest suburbs. There was no time for video games, but that's not always a bad thing. Distance makes the heart grow fonder, after all.

    I was welcomed back to the imaginary office with a news bludgeon to the face. The headlines this week have come thick and fast, bringing hardware announcements, more layoffs, and some notable sales milestones. As always, there's a lot to digest, so let's venture once more into the fray.

    The first Xbox handhelds have finally arrived
    via Game Developer // Microsoft finally stopped flirting with the idea of launching a handheld this week and unveiled not one but two devices: the ROG Xbox Ally and the ROG Xbox Ally X. The former is pitched towards casual players, while the latter aims to entice hardcore video game aficionados. Both devices were designed in collaboration with Asus and will presumably retail at price points that reflect their respective innards. We don't actually know yet, mind, because Microsoft didn't state how much they'll cost. You get the feeling that's where the company really needs to stick the landing here.

    Switch 2 tops 3.5 million sales to deliver Nintendo's biggest console launch
    via Game Developer // Four days. That's all it took for the Switch 2 to shift over 3.5 million units worldwide, making it Nintendo's biggest console launch ever. The original Switch needed a month to reach 2.74 million sales, by contrast, while the PS5 needed two months to sell 4.5 million units worldwide. Xbox sales remain a mystery because Microsoft just doesn't talk about that sort of thing anymore, which is decidedly frustrating for those oddballs who actually enjoy sifting through financial documents in search of those juicy, juicy numbers.

    Inside the 'Dragon Age' Debacle That Gutted EA's BioWare Studio
    via Bloomberg // How do you kill a franchise like Dragon Age and leave a studio with the pedigree of BioWare in turmoil? According to a new report from Bloomberg, the answer will likely resonate with developers across the industry: corporate meddling. Sources speaking to the publication explained how Dragon Age: The Veilguard, which failed to meet the expectations of parent company EA, was in constant disarray because the American publisher couldn't decide whether it should be a live-service or single-player title. Indecision from leadership within EA and an eventual pivot away from the live-service model only caused more confusion, with BioWare being told to implement foundational changes within impossible timelines. It's a story that's all the more alarming because of how familiar it feels.

    Sony is making layoffs at Days Gone developer Bend Studio
    via Game Developer // Sony has continued its Tony Award-winning turn as the Grim Reaper by cutting even more jobs within PlayStation Studios. Days Gone developer Bend Studio was the latest casualty, with the first-party developer confirming a number of employees were laid off just months after the cancellation of a live-service project. Sony didn't confirm how many people lost their jobs, but Bloomberg reporter Jason Schreier heard that around 40 people were let go.

    Embracer CEO Lars Wingefors to become executive chair and focus on M&A
    via Game Developer // Somewhere, in a deep dark corner of the world, the monkey's paw has curled. Embracer CEO Lars Wingefors, who demonstrated his leadership nous by spending years embarking on a colossal merger and acquisition spree only to immediately start downsizing, has announced he'll be stepping down as CEO. The catch? Wingefors is currently proposed to be appointed executive chair of Embracer's board. In his new role, he'll apparently focus on strategic initiatives, capital allocation, and mergers and acquisitions. And people wonder why satire is dead.

    Hong Kong Outlaws a Video Game, Saying It Promotes 'Armed Revolution'
    via The New York Times // National security police in Hong Kong have banned a Taiwanese video game called Reversed Front: Bonfire for supposedly "advocating armed revolution." Authorities in the region warned that anybody who downloads or recommends the online strategy title will face serious legal charges. The game has been pulled from Apple's marketplace in Hong Kong but is still available for download elsewhere. It was never available in mainland China. Developer ESC Taiwan, part of a group of volunteers who are vocal detractors of China's Communist Party, thanked Hong Kong authorities for the free publicity in a social media post and said the ban shows how political censorship remains prominent in the territory.

    RuneScape developer accused of 'catering to American conservatism' by rolling back Pride Month events
    via PinkNews // RuneScape developers inside Jagex have reportedly been left reeling after the studio decided to pivot away from Pride Month content to focus more on "what players wanted." Jagex's CEO broke the news to staff with a post on an internal message board, prompting a rush of complaints, with many workers explaining the content was either already complete or easy to implement. Though Jagex is based in the UK, its parent company CVC Capital Partners operates multiple companies in the United States. It's a situation that left one employee who spoke to PinkNews questioning whether the studio has caved to "American conservatism."

    SAG-AFTRA suspends strike and instructs union members to return to work
    via Game Developer // It has taken almost a year, but performer union SAG-AFTRA has finally suspended strike action and instructed members to return to work. The decision comes after protracted negotiations with the major studios that employ performers under the Interactive Media Agreement. SAG-AFTRA had been striking to secure better working conditions and AI protections for its members, and feels it has now secured a deal that will install vital "AI guardrails."

    A Switch 2 exclusive Splatoon spinoff was just shadow-announced on Nintendo Today
    via Game Developer // Nintendo did something peculiar this week when it unveiled a Splatoon spinoff out of the blue. That in itself might not sound too strange, but for a short window the announcement was only accessible via the company's new Nintendo Today mobile app. It's a situation that left people without access to the app questioning whether the news was even real. Nintendo Today prevented users from capturing screenshots or footage, only adding to the sense of confusion. It led to this reporter branding the move a "shadow announcement," which in turn left some of our readers perplexed. Can you ever announce an announcement? What does that term even mean? Food for thought.

    A wonderful new Big Walk trailer melted this reporter's heart
    via House House // The mad lads behind Untitled Goose Game are back with a new jaunt called Big Walk. This one has been on my radar for a while, but the studio finally debuted a gameplay overview during Summer Game Fest and it looks extraordinary in its purity. It's about walking and talking, and therein lies the charm. Players are forced to cooperate to navigate a lush open world, solve puzzles, and embark upon hijinks.
Proximity-based communication is the core mechanic in Big Walk—whether that takes the form of voice chat, written text, hand signals, blazing flares, or pictograms—and it looks like it'll lead to all sorts of weird and wonderful antics. It's a pitch that cuts through because it's so unashamedly different, and there's a lot to love about that. I'm looking forward to this one.
    Patch Notes #9: Xbox debuts its first handhelds, Hong Kong authorities ban a video game, and big hopes for Big Walk
    www.gamedeveloper.com
  • 8 Stunning Sunset Color Palettes

    Zoe Santoro

    Post may contain affiliate links, which give us commissions at no cost to you.
    There’s something absolutely magical about watching the sun dip below the horizon, painting the sky in breathtaking hues that seem almost too beautiful to be real. As a designer, I find myself constantly inspired by these natural masterpieces that unfold before us every evening. The way warm oranges melt into soft pinks, how deep purples blend seamlessly with golden yellows – it’s like nature’s own masterclass in color theory.
    If you’re looking to infuse your next project with the warmth, romance, and natural beauty of a perfect sunset, you’ve come to the right place. I’ve curated eight of the most captivating sunset color palettes that will bring that golden hour magic directly into your designs.
    The 8 Most Breathtaking Sunset Color Palettes
    1. Golden Hour Glow

    #FFD700 · #FF8C00 · #FF6347 · #CD5C5C

    This palette captures that perfect moment when everything seems to be touched by liquid gold. The warm yellows transition beautifully into rich oranges and soft coral reds, creating a sense of warmth and optimism that’s impossible to ignore. I find this combination works wonderfully for brands that want to evoke feelings of happiness, energy, and positivity.
    2. Tropical Paradise

    #FF69B4 · #FF1493 · #FF8C00 · #FFD700

    Inspired by those incredible sunsets you see in tropical destinations, this vibrant palette combines hot pinks with brilliant oranges and golden yellows. It’s bold, it’s energetic, and it’s perfect for projects that need to make a statement. I love using these colors for summer campaigns or anything that needs to capture that vacation feeling.
    3. Desert Dreams

    #CD853F · #D2691E · #B22222 · #8B0000


    The American Southwest produces some of the most spectacular sunsets on earth, and this palette pays homage to those incredible desert skies. The earthy browns blend into warm oranges before deepening into rich reds and burgundies. This combination brings a sense of grounding and authenticity that works beautifully for rustic or heritage brands.
    4. Pastel Evening

    #FFE4E1 · #FFA07A · #F0E68C · #DDA0DD


    Not every sunset needs to be bold and dramatic. This softer palette captures those gentle, dreamy evenings when the sky looks like it’s been painted with watercolors. The delicate pinks, peaches, and lavenders create a romantic, ethereal feeling that’s perfect for wedding designs, beauty brands, or any project that needs a touch of feminine elegance.
    5. Coastal Sunset

    #FAE991 · #FF7F50 · #FF6347 · #4169E1 · #1E90FF


    There’s something special about watching the sun set over the ocean, where warm oranges and corals meet the deep blues of the sea and sky. This palette captures that perfect contrast between warm and cool tones. I find it creates a sense of adventure and wanderlust that’s ideal for travel brands or outdoor companies.
    6. Urban Twilight

    #FFEDA3 · #FDAD52 · #FC8A6E · #575475 · #111F2A


    As the sun sets behind city skylines, you get these incredible contrasts between deep purples and vibrant oranges. This sophisticated palette brings together the mystery of twilight with the warmth of the setting sun. It’s perfect for creating designs that feel both modern and dramatic.
    7. Autumn Harvest

    #FF4500 · #FF8C00 · #DAA520 · #8B4513


    This palette captures those perfect fall evenings when the sunset seems to echo the changing leaves. The deep oranges and golden yellows create a cozy, inviting feeling that’s perfect for seasonal campaigns or brands that want to evoke comfort and tradition.
    8. Fire Sky

    #652220 · #DC143C · #FF0000 · #FF4500 · #FF8C00


    Sometimes nature puts on a show that’s so intense it takes your breath away. This bold, fiery palette captures those dramatic sunsets that look like the sky is literally on fire. It’s not for the faint of heart, but when you need maximum impact and energy, these colors deliver in spades.
    Why Sunset Colors Never Go Out of Style
    Before we explore how to use these palettes effectively, let’s talk about why sunset colors have such enduring appeal in design. There’s something deeply ingrained in human psychology that responds to these warm, glowing hues. They remind us of endings and beginnings, of peaceful moments and natural beauty.
    From a design perspective, sunset colors offer incredible versatility. They can be bold and energetic or soft and romantic. They work equally well for corporate branding and personal projects. And perhaps most importantly, they’re inherently optimistic – they make people feel good.
    I’ve found that incorporating sunset-inspired colors into modern projects adds an instant sense of warmth and approachability that resonates with audiences across all demographics. Whether you’re working on packaging design, web interfaces, or environmental graphics, these palettes can help create an emotional connection that goes beyond mere aesthetics.
    How to Master Sunset Palettes in Contemporary Design
    Using sunset colors effectively requires more than just picking pretty hues and hoping for the best. Here are some strategies I’ve developed for incorporating these palettes into modern design work:
    Start with Temperature Balance
    One of the most important aspects of working with sunset palettes is understanding color temperature. Most sunset combinations naturally include both warm and cool elements – the warm oranges and yellows of the sun itself, balanced by the cooler purples and blues of the surrounding sky. Maintaining this temperature balance keeps your designs from feeling flat or monotonous.
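    You can sanity-check that balance programmatically. The sketch below is a rough, hypothetical helper (the hue thresholds are an illustrative heuristic, not a formal colorimetry rule) that classifies one of this article's hex codes as warm or cool by its hue angle:

    ```python
    import colorsys

    def color_temperature(hex_color: str) -> str:
        """Rough warm/cool classification by hue angle.

        Illustrative heuristic only: reds, oranges, and yellows (plus magentas
        wrapping past 330 degrees) read as warm; greens, blues, and violets read
        as cool. Neutral grays fall through to "warm" here.
        """
        h = hex_color.lstrip("#")
        r, g, b = (int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))
        hue_deg = colorsys.rgb_to_hsv(r, g, b)[0] * 360
        return "warm" if hue_deg < 90 or hue_deg >= 330 else "cool"
    ```

    Run over the Coastal Sunset palette, this flags the coral #FF7F50 as warm and the royal blue #4169E1 as cool, which is exactly the temperature contrast described above.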
    Layer for Depth
    Real sunsets have incredible depth and dimension, with colors layering and blending into each other. Try to recreate this in your designs by using gradients, overlays, or layered elements rather than flat blocks of color. This approach creates visual interest and mimics the natural way these colors appear in nature.
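    One way to prototype that layering before opening a design tool is plain source-over compositing. This is a minimal sketch (a hypothetical helper; the alpha value is something you would tune by eye) that blends a translucent top color over a base color per channel:

    ```python
    def overlay(base_hex: str, top_hex: str, alpha: float) -> str:
        """Blend a semi-transparent top color over an opaque base color
        using simple per-channel source-over compositing (alpha in [0, 1])."""
        def to_rgb(h: str) -> list[int]:
            h = h.lstrip("#")
            return [int(h[i:i + 2], 16) for i in (0, 2, 4)]

        blended = [round(alpha * t + (1 - alpha) * b)
                   for b, t in zip(to_rgb(base_hex), to_rgb(top_hex))]
        return "#{:02X}{:02X}{:02X}".format(*blended)
    ```

    Overlaying a low-alpha coral from Golden Hour Glow onto its orange, for example, gives you the kind of in-between tone a flat color block never produces.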
    Consider Context and Contrast
    While sunset colors are beautiful, they need to work within the context of your overall design. Pay attention to readability – text needs sufficient contrast against sunset backgrounds. Consider using neutrals like deep charcoal or cream to provide breathing room and ensure your message remains clear.
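    Contrast is one place you don't have to guess: the WCAG 2.x formula gives a concrete pass/fail number. A small sketch of that standard calculation (the function names are mine):

    ```python
    def relative_luminance(hex_color: str) -> float:
        """Relative luminance of an sRGB hex color, per the WCAG 2.x definition."""
        h = hex_color.lstrip("#")
        linear = []
        for i in (0, 2, 4):
            c = int(h[i:i + 2], 16) / 255
            # Undo the sRGB transfer curve before weighting the channels.
            linear.append(c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4)
        r, g, b = linear
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg: str, bg: str) -> float:
        """WCAG contrast ratio; 4.5:1 or better passes AA for normal body text."""
        l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (l1 + 0.05) / (l2 + 0.05)
    ```

    For instance, black text on the #FF8C00 orange from Golden Hour Glow clears the 4.5:1 AA threshold comfortably, while white text on #FFD700 gold does not, which is exactly the kind of readability trap warm palettes set.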
    Embrace Gradual Transitions
    The magic of a sunset lies in how colors flow seamlessly from one to another. Incorporate this principle into your designs through smooth gradients, subtle color shifts, or elements that bridge between different hues in your palette.
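    When a design tool's gradient controls aren't enough, you can generate the intermediate stops yourself. This sketch (an illustrative helper; straight-line RGB interpolation is the simplest option, not the only one) produces evenly spaced colors between two palette endpoints:

    ```python
    def gradient_steps(start_hex: str, end_hex: str, steps: int) -> list[str]:
        """Return `steps` evenly spaced colors (steps >= 2) between two hex
        endpoints, using linear interpolation in RGB space."""
        def to_rgb(h: str) -> list[int]:
            h = h.lstrip("#")
            return [int(h[i:i + 2], 16) for i in (0, 2, 4)]

        a, b = to_rgb(start_hex), to_rgb(end_hex)
        out = []
        for i in range(steps):
            t = i / (steps - 1)  # 0.0 at the start color, 1.0 at the end color
            out.append("#{:02X}{:02X}{:02X}".format(
                *(round(x + (y - x) * t) for x, y in zip(a, b))))
        return out
    ```

    Feeding it the first and last stops of Golden Hour Glow yields the in-between tones that make the transition feel continuous rather than banded.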
    The Science Behind Our Sunset Obsession
    As someone who’s spent years studying color psychology, I’m fascinated by why sunset colors have such universal appeal. Research suggests that warm colors like those found in sunsets trigger positive emotional responses and can even increase feelings of comfort and security.
    There’s also the association factor – sunsets are linked in our minds with relaxation, beauty, and positive experiences. When we see these colors in design, we unconsciously associate them with those same positive feelings. This makes sunset palettes particularly effective for brands that want to create emotional connections with their audiences.
    The cyclical nature of sunsets also plays a role. They happen every day, marking the transition from activity to rest, from work to leisure. This gives sunset colors a sense of familiarity and comfort that few other color combinations can match.
    Applying Sunset Palettes Across Design Disciplines
    One of the things I love most about sunset color palettes is how adaptable they are across different types of design work:
    Brand Identity Design
    Sunset colors can help brands convey warmth, optimism, and approachability. I’ve used variations of these palettes for everything from artisanal food companies to wellness brands. The key is choosing the right intensity level for your brand’s personality – softer palettes for more refined brands, bolder combinations for companies that want to make a statement.
    Digital Design
    In web and app design, sunset colors can create interfaces that feel warm and inviting rather than cold and clinical. I often use these palettes for backgrounds, accent elements, or call-to-action buttons. The natural flow between colors makes them perfect for creating smooth user experiences that guide the eye naturally through content.
    Print and Packaging
    Sunset palettes really shine in print applications where you can take advantage of rich, saturated colors. They work beautifully for packaging design, particularly for products associated with warmth, comfort, or natural ingredients. The key is ensuring your color reproduction is accurate – sunset colors can look muddy if not handled properly in print.
    Environmental Design
    In spaces, sunset colors can create incredibly welcoming environments. I’ve seen these palettes used effectively in restaurants, retail spaces, and even corporate offices where the goal is to create a sense of warmth and community.
    Seasonal Considerations and Trending Applications
    While sunset colors are timeless, they do have natural seasonal associations that smart designers can leverage. The warmer, more intense sunset palettes work beautifully for fall and winter campaigns, while the softer, more pastel variations are perfect for spring and summer applications.
    I’ve noticed a growing trend toward using sunset palettes in unexpected contexts – tech companies embracing warm gradients, financial services using sunset colors to appear more approachable, and healthcare brands incorporating these hues to create more comforting environments.
    Conclusion: Bringing Natural Beauty Into Modern Design
    As we’ve explored these eight stunning sunset color palettes, I hope you’ve gained a new appreciation for the incredible design potential that nature provides us every single day. These colors aren’t just beautiful – they’re powerful tools for creating emotional connections, conveying brand values, and making designs that truly resonate with people.
    The secret to successfully using sunset palettes lies in understanding both their emotional impact and their technical requirements. Don’t be afraid to experiment with different combinations and intensities, but always keep your audience and context in mind.
    Remember, the best sunset colors aren’t just about picking the prettiest hues – they’re about capturing the feeling of those magical moments when day transitions to night. Whether you’re creating a logo that needs to convey warmth and trust, designing a website that should feel welcoming and approachable, or developing packaging that needs to stand out on crowded shelves, these sunset-inspired palettes offer endless possibilities.
    So the next time you catch yourself stopped in your tracks by a particularly stunning sunset, take a moment to really study those colors. Notice how they blend and flow, how they make you feel, and how they change as the light shifts. Then bring that natural magic into your next design project.
    After all, if nature can create such breathtaking color combinations every single day, imagine what we can achieve when we learn from the master. Happy designing!

    Zoe Santoro

    Zoe is an art student and graphic designer with a passion for creativity and adventure. Whether she’s sketching in a cozy café or capturing inspiration from vibrant cityscapes, she finds beauty in every corner of the world. With a love for bold colors, clean design, and storytelling through visuals, Zoe blends her artistic skills with her wanderlust to create stunning, travel-inspired designs. Follow her journey as she explores new places, discovers fresh inspiration, and shares her creative process along the way.

    #stunning #sunset #color #palettes
    8 Stunning Sunset Color Palettes
    designworklife.com
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”          
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all: most real-world doctors appear to be using AI at least occasionally, and sometimes much more than occasionally, to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was, weirdly, an area of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning. 
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, …
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that? 
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and more about assessing how well AI models can complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important work that speaks to how well AI models function in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
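To make the contrast with multiple-choice exams concrete, here is a minimal sketch of how a rubric-style, task-based evaluation works in general. The case, criteria, and point values below are invented for illustration; this shows the general pattern, not the actual HealthBench schema or grading code.

```python
# Simplified sketch of rubric-based evaluation in the style of
# task-oriented health benchmarks. All names and data here are
# hypothetical illustrations, not any benchmark's real schema.

def score_response(criteria_met, criteria):
    """Score one model response as earned points / available points.

    criteria: list of (description, points) pairs, e.g. written by clinicians.
    criteria_met: set of indices that a grader judged the response satisfied.
    """
    total = sum(points for _, points in criteria)
    earned = sum(points for i, (_, points) in enumerate(criteria)
                 if i in criteria_met)
    return earned / total if total else 0.0

# One hypothetical case: a patient message describing chest pain.
criteria = [
    ("Advises seeking emergency care", 5),
    ("Asks about symptom onset and duration", 2),
    ("Avoids giving a definitive diagnosis", 3),
]

# Suppose a grader found the response met criteria 0 and 2.
print(score_response({0, 2}, criteria))  # 8 of 10 points -> 0.8
```

The essential difference from an exam score is that each case is graded against task-specific criteria with weights, so the benchmark can reward behaviors (escalation, appropriate hedging) that a multiple-choice answer key cannot express.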
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
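The computational core of that “patients like me” idea can be sketched in a few lines: represent each patient as a feature vector, retrieve the most similar past patients, and look at their outcomes. Everything below, the features, records, and outcomes, is invented purely for illustration.

```python
import math

def similar_patients(query, records, k=3):
    """Return the k records nearest to the query patient vector,
    by Euclidean distance over (age, bmi, systolic_bp)."""
    def dist(record):
        return math.dist(query, record["features"])
    return sorted(records, key=dist)[:k]

# Hypothetical historical records: feature vector plus observed outcome.
records = [
    {"features": (62, 31.0, 150), "outcome": "responded to drug A"},
    {"features": (64, 30.5, 148), "outcome": "responded to drug A"},
    {"features": (35, 22.0, 118), "outcome": "no treatment needed"},
    {"features": (60, 29.8, 152), "outcome": "responded to drug B"},
]

# A new 63-year-old patient with similar vitals.
neighbors = similar_patients((63, 30.2, 149), records)
print([r["outcome"] for r in neighbors])
# -> ['responded to drug A', 'responded to drug A', 'responded to drug B']
```

A real system would normalize and weight the features, use far richer clinical representations, and operate under strict privacy controls; the sketch only shows the retrieve-similar-patients-and-compare-outcomes pattern.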
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    How AI is reshaping the future of healthcare and medical research
Transcript

PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”

This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.

Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?

In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here. The book passage I read at the top is from “Chapter 10: The Big Black Bag.”

In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.

As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.

Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.

Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.

Here’s my conversation with Bill Gates and Sébastien Bubeck.

LEE: Bill, welcome.

BILL GATES: Thank you.
LEE: Seb …

SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.

LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?

And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?

GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.

And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way, and that I didn’t expect them to do that very quickly, but it would be profound.

And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was, weirdly, an area of incredible weakness of those early models.
But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.

LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?

GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …

LEE: Right.

GATES: … that is a bit weird.

LEE: Yeah.

GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.

LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.

BUBECK: Yes.

LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.

BUBECK: Yeah.

LEE: And so what were your first encounters? Because I actually don’t remember what happened then.

BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.

I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way.
But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.

So this was really, to me, the first moment where I saw some understanding in those models.

LEE: So this was, just to get the timing right, that was before I pulled you into the tent.

BUBECK: That was before. That was like a year before.

LEE: Right.

BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.

So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.

And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?

LEE: Yeah.

BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible.
And just right there, it was shown to be possible.

LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.

And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.

I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.

But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses.

So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.
Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?

GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.

It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.

But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.

LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?

BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous.
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.

Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.

And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.

Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.

It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.
LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?

GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.

The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, … So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.

LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?

GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.
The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.

LEE: Right.

GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.

LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.

BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmarks. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.

LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis.
And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?

That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?

BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.

Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.

But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that.
It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.

So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.

LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …

BUBECK: It’s a very difficult, very difficult balance.

LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?

GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.

Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here?
What’s my relationship to them?

Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.

LEE: Yeah.

GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.

LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.

BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.

That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot.
And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.

LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?

BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.

LEE: So we have about three hours of stuff to talk about, but our time is actually running low.

BUBECK: Yes, yes, yes.

LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?

GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.
The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.

And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.

LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?

GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.

LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.

I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.

BUBECK: Yeah, absolutely.
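As a tiny illustration of what “checkable for validity” means, here is a Lean 4 fragment: the proof checker verifies each proof mechanically, so a reader only needs to trust the theorem statements. This is a minimal sketch of the idea, not an example of the complex machine-generated proofs Lee is predicting.

```lean
-- Two tiny machine-checked facts in Lean 4.
-- The checker verifies the proofs; a reader need only read the statements.
example : 2 + 2 = 4 := rfl            -- verified by computation

theorem my_add_comm (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n                    -- appeal to a core library lemma
```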
I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see if it produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? 
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. 
He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? 
What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.
    How AI is reshaping the future of healthcare and medical research
    www.microsoft.com
    Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  
BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. 
It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  
I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. 
Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  
You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. 
[LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  
It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa … so, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  
LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? 
So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. 
So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.

Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?

Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.

LEE: Yeah.

GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.

LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.

BUBECK: Yeah. OK.
So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.

That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.

LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?

BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.

LEE: So we have about three hours of stuff to talk about, but our time is actually running low.

BUBECK: Yes, yes, yes.

LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?
GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.

The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.

And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.

LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?

GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.

LEE: Yeah.
By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.

I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.

BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.

And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.

LEE: Yeah.

BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.
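For readers who haven’t worked with a proof assistant, a machine-checkable proof in a language like Lean is an ordinary text file that the checker verifies mechanically, with no human judgment involved. A minimal illustrative example (the theorem name here is our own; `Nat.add_comm` is a Lean 4 core lemma):

```lean
-- A tiny machine-checkable theorem: addition of natural numbers is commutative.
-- The Lean checker either accepts this proof or rejects it; there is no middle ground.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The proofs Lee anticipates would be verified the same way, just at a length and complexity no human reviewer could follow.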
Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.

Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.

LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …

BUBECK: Yeah.

LEE: … or an endocrinologist might not.

BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.

LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?

BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.
And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …

LEE: Will AI prescribe your medicines? Write your prescriptions?

BUBECK: I think yes. I think yes.

LEE: OK. Bill?

GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.

You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.

LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.
I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.

[TRANSITION MUSIC]

GATES: Yeah. Thanks, you guys.

BUBECK: Thank you, Peter. Thanks, Bill.

LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.

With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.

And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.

One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.
HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
  • Earth’s mantle may have hidden plumes venting heat from its core

Al Hajar Mountains in Oman. Photograph: L_B_Photography/Shutterstock
    A section of Earth’s mantle beneath Oman appears to be unusually warm, in what researchers say may be the first known “ghost plume” – a column of hot rock emanating from the lower mantle without apparent volcanic activity on the surface.
    Mantle plumes are mysterious upwellings of molten rock believed to transmit heat from the core-mantle boundary to the Earth’s surface, far from the edges of tectonic plates. There are a dozen or so examples thought to occur underneath the middle of continental plates – for instance, beneath Yellowstone and the East African rift. “But these are all cases where you do have surface volcanism,” says Simone Pilia at the King Fahd University of Petroleum and Minerals in Saudi Arabia. Oman has no such volcanic clues.
Pilia first came to suspect there was a plume beneath Oman “serendipitously” after he began analysing new seismic data from the region. He observed that the velocity of waves generated by distant earthquakes slowed in a cylindrical area beneath eastern Oman, indicating that the rocks there were less rigid than the surrounding material due to high temperatures.
    Other independent seismic measurements showed key boundaries where minerals deep in the Earth change phases in a way consistent with a hot plume. These measurements suggest the plume extends more than 660 kilometres below the surface.
    The presence of a plume could also explain why the region has continued to rise in elevation long after tectonic compression – a geological process where the Earth’s crust is squeezed together – stopped. It also fits with models of what could have caused a shift in the movement of the Indian tectonic plate.
    “The more we gathered evidence, the more we were convinced that it is a plume,” says Pilia, who named the geologic feature the “Dani plume” after his son.


    “It’s plausible” that a plume indeed exists there, says Saskia Goes at Imperial College London, adding the study is “thorough”. However, she points out narrow plumes are notoriously difficult to detect.
    If it does exist, however, the presence of a “ghost plume” contained within the mantle by the relatively thick rocky layer beneath Oman would suggest there are others, says Pilia. “We’re convinced that the Dani plume is not alone.”
    If there are many other hidden plumes, it could mean more heat from the core is flowing directly through the mantle via plumes, rather than through slower convection, says Goes. “It has implications, potentially, for the evolution of the Earth if we get a different estimate of how much heat comes out of the mantle.”
Journal reference: Earth and Planetary Science Letters, DOI: 10.1016/j.epsl.2025.119467
Source: www.newscientist.com
  • MindsEye review – a dystopian future that plays like it’s from 2012

There’s a Sphere-alike in Redrock, MindsEye’s open-world version of Las Vegas. It’s pretty much a straight copy of the original: a huge soap bubble, half sunk into the desert floor, with its surface turned into a gigantic TV. Occasionally you’ll pull up near the Sphere while driving an electric vehicle made by Silva, the megacorp that controls this world. You’ll sometimes come to a stop just as an advert for an identical Silva EV plays out on the huge curved screen overhead. The doubling effect can be slightly vertigo-inducing.

At these moments, I truly get what MindsEye is trying to do. You’re stuck in the ultimate company town, where oligarchs and other crooks run everything, and there’s no hope of escaping the ecosystem they’ve built. MindsEye gets this all across through a chance encounter, and in a way that’s both light of touch and clever. The rest of the game tends towards the heavy-handed and silly, but it’s nice to glimpse a few instances where everything clicks.

With its Spheres and omnipresent EVs, MindsEye looks and sounds like the future. It’s concerned with AI and tech bros and the insidious creep of a corporate dystopia. You play as an amnesiac former soldier who must work out the precise damage that technology has done to his humanity, while shooting people and robots and drones. And alongside the campaign itself, MindsEye also has a suite of tools for making your own game or levels and publishing them for fellow players. All of this has come from a studio founded by Leslie Benzies, whose production credits include the likes of GTA 5.

AI overlords … MindsEye. Photograph: IOI Partners

What’s weird, then, is that MindsEye generally plays like the past. Put a finger to the air and the wind is blowing from somewhere around 2012. At heart, this is a roughly hewn cover shooter with an open world that you only really experience when you’re driving between missions. Its topical concerns mainly exist to justify double-crosses and car chases and shootouts, and to explain why you head into battle with a personal drone that can open doors for you and stun nearby enemies.

It can be an uncanny experience, drifting back through the years to a time when many third-person games still featured unskippable cut-scenes and cover that could be awkward to unstick yourself from. I should add that there are plenty of reports at the moment of crashes and technical glitches and characters turning up without their faces in place. Playing on a relatively old PC, aside from one crash and a few amusing bugs, I’ve been mostly fine. I’ve just been playing a game that feels equally elderly.

This is sometimes less of a criticism than it sounds. There is a definite pleasure to be had in simple run-and-gun missions where you shoot very similar looking people over and over again and pick a path between waypoints. The shooting often feels good, and while it’s a bit of a swizz to have to drive to and from each mission, the cars have a nice fishtaily looseness to them that can, at times, invoke the Valium-tinged glory of the Driver games.

Driving between missions … MindsEye. Photograph: Build A Rocket Boy/IOI Partners

And for a game that has thought a lot about the point at which AI takes over, the in-game AI around me wasn’t in danger of taking over anything. When I handed over control of my car to the game while tailing an enemy, having been told I should try not to be spotted, the game made sure our bumpers kissed at every intersection. The streets of this particular open world are filled with amusingly unskilled AI drivers. I’d frequently arrive at traffic lights to be greeted by a recent pile-up, so delighted by the off-screen collisions that had scattered road cones and Dumpsters across my path that I almost always stopped to investigate.

I even enjoyed the plot’s hokeyness, which features lines such as: “Your DNA has been altered since we last met!” Has it, though? Even so, I became increasingly aware that clever people had spent a good chunk of their working lives making this game. I don’t think they intended to cast me as what is in essence a Deliveroo bullet courier for an off-brand Elon Musk. Or to drop me into an open world that feels thin not because it lacks mission icons and fishing mini-games, but because it’s devoid of convincing human detail.

I suspect the problem may actually be a thematically resonant one: a reckless kind of ambition. When I dropped into the level editor I found a tool that’s astonishingly rich and complex, but which also requires a lot of time and effort if you want to make anything really special in it. This is for the mega-fans, surely, the point-one percent. It must have taken serious time to build, and to do all that alongside a campaign is the kind of endeavour that requires a real megacorp behind it.

MindsEye is an oddity. For all its failings, I rarely disliked playing it, and yet it’s also difficult to sincerely recommend. Its ideas, its moment-to-moment action and narrative are so thinly conceived that it barely exists. And yet: I’m kind of happy that it does.

    MindsEye is out now; £54.99
    www.theguardian.com
  • How a US agriculture agency became key in the fight against bird flu

    A dangerous strain of bird flu is spreading in US livestockMediaMedium/Alamy
    Since Donald Trump assumed office in January, the leading US public health agency has pulled back preparations for a potential bird flu pandemic. But as it steps back, another government agency is stepping up.

    While the US Department of Health and Human Services (HHS) previously held regular briefings on its efforts to prevent a wider outbreak of a deadly bird flu virus called H5N1 in people, it largely stopped once Trump took office. It has also cancelled funding for a vaccine that would have targeted the virus. In contrast, the US Department of Agriculture (USDA) has escalated its fight against H5N1’s spread in poultry flocks and dairy herds, including by funding the development of livestock vaccines.
    This particular virus – a strain of avian influenza called H5N1 – poses a significant threat to humans, having killed about half of the roughly 1,000 people worldwide who tested positive for it since 2003. While the pathogen spreads rapidly in birds, it is poorly adapted to infecting humans and isn’t known to transmit between people. But that could change if it acquires mutations that allow it to spread more easily among mammals – a risk that increases with each mammalian infection.
    The possibility of H5N1 evolving to become more dangerous to people has grown significantly since March 2024, when the virus jumped from migratory birds to dairy cows in Texas. More than 1,070 herds across 17 states have been affected since then.
    H5N1 also infects poultry, placing the virus in closer proximity to people. Since 2022, nearly 175 million domestic birds have been culled in the US due to H5N1, and almost all of the 71 people who have tested positive for it had direct contact with livestock.


    “We need to take this seriously because when [H5N1] constantly is spreading, it’s constantly spilling over into humans,” says Seema Lakdawala at Emory University in Georgia. The virus has already killed a person in the US and a child in Mexico this year.
    Still, cases have declined under Trump. The last recorded human case was in February, and the number of affected poultry flocks fell 95 per cent between then and June. Outbreaks in dairy herds have also stabilised.
    It isn’t clear what is behind the decline. Lakdawala believes it is partly due to a lull in bird migration, which reduces opportunities for the virus to spread from wild birds to livestock. It may also reflect efforts by the USDA to contain outbreaks on farms. In February, the USDA unveiled a $1 billion plan for tackling H5N1, including strengthening farmers’ defences against the virus, such as through free biosecurity assessments. Of the 150 facilities that have undergone assessment, only one has experienced an H5N1 outbreak.
    Under Trump, the USDA also continued its National Milk Testing Strategy, which mandates farms provide raw milk samples for influenza testing. If a farm is positive for H5N1, it must allow the USDA to monitor livestock and implement measures to contain the virus. The USDA launched the programme in December and has since ramped up participation to 45 states.
    “The National Milk Testing Strategy is a fantastic system,” says Erin Sorrell at Johns Hopkins University in Maryland. Along with the USDA’s efforts to improve biosecurity measures on farms, milk testing is crucial for containing the outbreak, says Sorrell.

    But while the USDA has bolstered its efforts against H5N1, the HHS doesn’t appear to have followed suit. In fact, the recent drop in human cases may reflect decreased surveillance due to workforce cuts, says Sorrell. In April, the HHS laid off about 10,000 employees, including 90 per cent of staff at the National Institute for Occupational Safety and Health, an office that helps investigate H5N1 outbreaks in farm workers.
    “There is an old saying that if you don’t test for something, you can’t find it,” says Sorrell. Yet a spokesperson for the US Centers for Disease Control and Prevention (CDC) says its guidance and surveillance efforts have not changed. “State and local health departments continue to monitor for illness in persons exposed to sick animals,” they told New Scientist. “CDC remains committed to rapidly communicating information as needed about H5N1.”
    The USDA and HHS also diverge on vaccination. While the USDA has allocated $100 million toward developing vaccines and other solutions for preventing H5N1’s spread in livestock, the HHS cancelled $776 million in contracts for influenza vaccine development. The contracts – terminated on 28 May – were with the pharmaceutical company Moderna to develop vaccines targeting flu subtypes, including H5N1, that could cause future pandemics. The news came the same day Moderna reported nearly 98 per cent of the roughly 300 participants who received two doses of the H5 vaccine in a clinical trial had antibody levels believed to be protective against the virus.
    The US has about five million H5N1 vaccine doses stockpiled, but these are made using eggs and cultured cells, which take longer to produce than mRNA-based vaccines like Moderna’s. The Moderna vaccine would have modernised the stockpile and enabled the government to rapidly produce vaccines in the event of a pandemic, says Sorrell. “It seems like a very effective platform and would have positioned the US and others to be on good footing if and when we needed a vaccine for our general public,” she says.

    The HHS cancelled the contracts due to concerns about mRNA vaccines, which Robert F Kennedy Jr – the country’s highest-ranking public health official – has previously cast doubt on. “The reality is that mRNA technology remains under-tested, and we are not going to spend taxpayer dollars repeating the mistakes of the last administration,” said HHS communications director Andrew Nixon in a statement to New Scientist.
    However, mRNA technology isn’t new. It has been in development for more than half a century and numerous clinical trials have shown mRNA vaccines are safe. While they do carry the risk of side effects – the majority of which are mild – this is true of almost every medical treatment. In a press release, Moderna said it would explore alternative funding paths for the programme.
    “My stance is that we should not be looking to take anything off the table, and that includes any type of vaccine regimen,” says Lakdawala.
    “Vaccines are the most effective way to counter an infectious disease,” says Sorrell. “And so having that in your arsenal and ready to go just give you more options.”
    www.newscientist.com
  • How jam jars explain Apple’s success

    We are told to customize, expand, and provide more options, but that might be a silent killer for our conversion rate. Using behavioral psychology and modern product design, this piece explains why brands like Apple use fewer, smarter choices to convert better.

    Jam-packed decisions

    Imagine standing in a supermarket aisle in front of the jam section. How do you decide which jam to buy? You could go for your usual jam, or maybe this is your first time buying jam. Either way, a choice has to be made. Or does it? You may have seen the vast number of choices, gotten overwhelmed, and walked away. The same scenario was reflected in the findings of a 2000 study by Iyengar and Lepper that explored how the number of choice options can affect decision-making.

    Iyengar and Lepper set up two scenarios: in the first, customers in a supermarket were offered 24 jams for a free tasting; in the second, they were offered only 6. One would expect the first scenario to see more sales. After all, more variety means a happier customer. However, while 60% of customers stopped by for a tasting of the 24 jams, only 3% of them ended up making a purchase. When faced with 6 options, 40% of customers stopped by, but 30% of those ended up making a purchase.

    The implications of the study were evident: while one may think that more choices are better, decision-makers actually prefer fewer. This phenomenon is known as the Paradox of Choice. More choice leads to less satisfaction, because one gets overwhelmed. This analysis paralysis results from humans being cognitive misers: decisions that require deeper thinking feel exhausting and seem to come at a cognitive cost. In such scenarios, we tend not to make a choice at all, or to fall back on a default option.
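    The study's headline numbers are easier to compare as end-to-end conversion rates, i.e. the share of all passers-by who both stopped and bought. A quick check of the arithmetic:

```python
def conversion(stop_rate: float, purchase_rate: float) -> float:
    """Fraction of all passers-by who stop at the display AND buy."""
    return stop_rate * purchase_rate

# 24-jam table: 60% stopped, 3% of those bought.
large = conversion(0.60, 0.03)
# 6-jam table: 40% stopped, 30% of those bought.
small = conversion(0.40, 0.30)

print(f"24 jams: {large:.1%} of passers-by bought")   # 1.8%
print(f"6 jams:  {small:.1%} of passers-by bought")   # 12.0%
print(f"The smaller display converts {small / large:.1f}x better")
```

    Despite attracting fewer browsers, the 6-jam display ends up selling to several times more of the people walking past, which is the asymmetry the study is remembered for.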
    Even after a decision has been made, in many cases regret, or the nagging thought of whether you made the ‘right’ choice, can linger.

    A sticky situation

    However, a 2010 meta-analysis by Benjamin Scheibehenne was unable to replicate the findings. Scheibehenne questioned whether it was choice overload or information overload that was the issue. Other researchers have argued that it is the lack of meaningful choice that affects satisfaction. Additionally, Barry Schwartz, a renowned psychologist and the author of the book ‘The Paradox of Choice: Why Less Is More,’ later suggested that the paradox of choice diminishes when a person knows the options well and when the choices are presented well.

    Does that mean the paradox of choice was an overhyped notion? I conducted a mini-study to test this hypothesis.

    From shelves to spreadsheets: testing the jam jar theory

    I created a simple scatterplot in R using a publicly available dataset from the Brazilian e-commerce site Olist, Brazil’s largest department store on marketplaces. After delivery, customers are asked to fill out a satisfaction survey with a rating or comment option. I analysed the relationship between the number of distinct products in a category and the average customer review.

    Scatterplot generated in R using the Olist dataset

    Based on the almost horizontal regression line in the plot above, it is evident that more choice does not lead to more satisfaction. Furthermore, categories with fewer than 200 products tend to have average review scores between 4.0 and 4.3, whereas categories with more than 1,000 products do not have a higher average satisfaction score, with some even falling below 4.0. This suggests that more choices do not equal more satisfaction, and may even reduce it. These findings support the Paradox of Choice, and the dataset helps bring theory into real-world commerce.
A curation of lesser, well-presented, and differentiated options could lead to more customer satisfaction.Image created using CanvaFurthermore, the plot could help suggest a more nuanced perspective; people want more choices, as this gives them autonomy. However, beyond a certain point, excessive choice overwhelms rather than empowers, leaving people dissatisfied. Many product strategies reflect this insight: the goal is to inspire confident decision-making rather than limiting freedom. A powerful example of this shift in thinking comes from Apple’s history.Simple tastes, sweeter decisionsImage source: Apple InsiderIt was 1997, and Steve Jobs had just made his return to Apple. The company at the time offered 40 different products; however, its sales were declining. Jobs made one question the company’s mantra,“What are the four products we should be building?”The following year, Apple saw itself return to profitability after introducing the iMac G3. While its success can be attributed to the introduction of a new product line and increased efficiency, one cannot deny that the reduction in the product line simplified the decision-making process for its consumers.To this day, Apple continues to implement this strategy by having a few SKUs and confident defaults.Apple does not just sell premium products; it sells a premium decision-making experience by reducing friction in decision-making for the consumer.Furthermore, a 2015 study based on analyzing scenarios where fewer choice options led to increased sales found the following mitigating factors in buying choices:Time Pressure: Easier and quicker choices led to more sales.Complexity of options: The easier it was to understand what a product was, the better the outcome.Clarity of Preference: How easy it was to compare alternatives and the clarity of one’s preferences.Motivation to Optimize: Whether the consumer wanted to put in the effort to find the ‘best’ option.Picking the right spreadWhile the extent of the 
validity of the Paradox of Choice is up for debate, its impact cannot be denied. It is still a helpful model that can be used to drive sales and boost customer satisfaction. So, how can one use it as a part of your business’s strategy?Remember, what people want isn’t 50 good choices. They want one confident, easy-to-understand decision that they think they will not regret.Here are some common mistakes that confuse consumers and how you can apply the Jam Jar strategy to curate choices instead:Image is created using CanvaToo many choices lead to decision fatigue.Offering many SKU options usually causes customers to get overwhelmed. Instead, try curating 2–3 strong options that will cover the majority of their needs.2. Being dependent on the users to use filters and specificationsWhen users have to compare specifications themselves, they usually end up doing nothing. Instead, it is better to replace filters with clear labels like “Best for beginners” or “Best for oily skin.”3. Leaving users to make comparisons by themselvesToo many options can make users overwhelmed. Instead, offer default options to show what you recommend. This instills within them a sense of confidence when making the final decision.4. More transparency does not always mean more trustInformation overload never leads to conversions. Instead, create a thoughtful flow that guides the users to the right choices.5. Users do not aim for optimizationAssuming that users will weigh every detail before making a decision is not rooted in reality. In most cases, they will go with their gut. Instead, highlight emotional outcomes, benefits, and uses instead of numbers.6. Not onboarding users is a critical mistakeHoping that users will easily navigate a sea of products without guidance is unrealistic. Instead, use onboarding tools like starter kits, quizzes, or bundles that act as starting points.7. Variety for the sake of varietyUsers crave clarity more than they crave variety. 
Instead, focus on simplicity when it comes to differentiation.And lastly, remember that while the paradox of choice is a helpful tool in your business strategy arsenal, more choice is not inherently bad. It is the lack of structure in the decision-making process that is the problem. Clear framing will always make decision-making a seamless experience for both your consumers and your business.How jam jars explain Apple’s success was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    #how #jam #jars #explain #apples
    How jam jars explain Apple’s success
    We are told to customize, expand, and provide more options, but that might be a silent killer for our conversion rate. Using behavioral psychology and modern product design, this piece explains why brands like Apple use fewer, smarter choices to convert better.

    [Image generated using ChatGPT]

    Jam-packed decisions

    Imagine standing in a supermarket aisle in front of the jam section. How do you decide which jam to buy? You could go for your usual jam, or maybe this is your first time buying jam. Either way, a choice has to be made. Or does it? You may have seen the vast number of choices, gotten overwhelmed, and walked away. The same scenario was reflected in the findings of a 2000 study by Iyengar and Lepper that explored how the number of options affects decision-making.

    Iyengar and Lepper set up two scenarios: in the first, customers in a supermarket were offered 24 jams at a free tasting stand; in the second, they were offered only 6. One would expect the first scenario to see more sales. After all, more variety means a happier customer. However:

    [Image created using Canva]

    While 60% of customers stopped by the 24-jam display for a tasting, only 3% of them ended up making a purchase. When faced with 6 options, 40% of customers stopped by, but 30% of them ended up making a purchase.

    The implication of the study was evident: while one may assume that more choices are better, decision-makers actually prefer fewer. This phenomenon is known as the Paradox of Choice: more choice leads to less satisfaction, because one gets overwhelmed. This analysis paralysis results from humans being cognitive misers: decisions that require deeper thinking feel exhausting and carry a cognitive cost. In such scenarios, we tend not to make a choice at all, or to fall back on a default option.
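    The study's percentages are easiest to compare as a share of all passersby: the effective conversion rate is the stop rate multiplied by the purchase rate. A minimal Python sketch of that arithmetic, using the figures reported above:

```python
def effective_conversion(stop_rate, buy_rate):
    """Share of ALL passersby who end up purchasing.

    stop_rate: fraction of passersby who stop to taste.
    buy_rate: fraction of those who stopped who then buy.
    """
    return stop_rate * buy_rate

large_display = effective_conversion(0.60, 0.03)  # 24 jams
small_display = effective_conversion(0.40, 0.30)  # 6 jams

print(f"24 jams: {large_display:.1%} of passersby purchase")  # 1.8%
print(f"6 jams:  {small_display:.1%} of passersby purchase")  # 12.0%
print(f"Ratio: {small_display / large_display:.0f}x")         # roughly 7x
```

    Despite attracting fewer tasters, the smaller display converts nearly seven times more of the total foot traffic.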
Even after a decision has been made, regret, or the lingering question of whether you made the 'right' choice, often remains.

A sticky situation

However, a 2010 meta-analysis by Benjamin Scheibehenne was unable to replicate the findings. Scheibehenne questioned whether the real issue was choice overload or information overload. Other researchers have argued that it is the lack of meaningful choice that affects satisfaction. Additionally, Barry Schwartz, the renowned psychologist who wrote 'The Paradox of Choice: Why More Is Less,' later suggested that the paradox of choice diminishes when people know the options well and when the choices are presented well.

Does that mean the paradox of choice was an overhyped notion? I conducted a mini-study to test this hypothesis.

From shelves to spreadsheets: testing the jam jar theory

I created a simple scatterplot in R using a publicly available dataset from the Brazilian e-commerce platform Olist, the largest department-store marketplace in Brazil. After delivery, customers are asked to fill out a satisfaction survey with a rating and an optional comment. I analysed the relationship between the number of distinct products in a category (choices) and the average customer review score (satisfaction).

[Scatterplot generated in R using the Olist dataset]

The almost horizontal regression line in the plot suggests that more choice does not lead to more satisfaction. Categories with fewer than 200 products tend to have average review scores between 4.0 and 4.3, whereas categories with more than 1,000 products do not have higher average scores, with some even falling below 4.0. More choice does not equal more satisfaction, and can even reduce it. These findings support the Paradox of Choice, and the dataset helps bring the theory into real-world commerce.
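The analysis itself boils down to a group-by and a linear fit. For readers who want to reproduce it without R, here is a minimal Python re-sketch of the same idea, run on a tiny invented stand-in for the joined Olist tables (the real column names, which this sketch assumes, come from the public Olist dataset on Kaggle):

```python
import numpy as np
import pandas as pd

def choice_vs_satisfaction(df: pd.DataFrame) -> float:
    """Slope of average review score vs. number of distinct products per category.

    Expects columns: product_category_name, product_id, review_score
    (the joined Olist product, order-item, and review tables provide these).
    A slope near zero means more choice buys no extra satisfaction.
    """
    per_category = df.groupby("product_category_name").agg(
        n_products=("product_id", "nunique"),
        avg_review=("review_score", "mean"),
    )
    slope, _intercept = np.polyfit(per_category["n_products"], per_category["avg_review"], 1)
    return slope

# Tiny synthetic demo: the category with the most products does not get better reviews.
demo = pd.DataFrame({
    "product_category_name": ["a"] * 4 + ["b"] * 3 + ["c"] * 3,
    "product_id": [1, 2, 3, 4, 5, 6, 5, 7, 7, 7],
    "review_score": [4, 4, 5, 3, 4, 5, 4, 5, 4, 4],
})
print(f"slope = {choice_vs_satisfaction(demo):.3f}")  # negative on this toy data
```

On the full Olist data the same computation yields the near-flat regression line described above; the demo data here is purely illustrative.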
A curation of fewer, well-presented, and differentiated options could lead to more customer satisfaction.

[Image created using Canva]

The plot also suggests a more nuanced perspective: people want choices, because choice gives them autonomy, but beyond a certain point excessive choice overwhelms rather than empowers, leaving people dissatisfied. Many product strategies reflect this insight: the goal is to inspire confident decision-making rather than to limit freedom. A powerful example of this shift in thinking comes from Apple's history.

Simple tastes, sweeter decisions

[Image source: Apple Insider]

It was 1997, and Steve Jobs had just made his return to Apple. The company offered around 40 different products at the time, yet its sales were declining. Jobs made a single question the company's mantra: "What are the four products we should be building?" The following year, Apple returned to profitability after introducing the iMac G3. While that success can be attributed to a new product line and increased efficiency, one cannot deny that shrinking the product line simplified the decision-making process for consumers. To this day, Apple continues to implement this strategy with a handful of SKUs and confident defaults. Apple does not just sell premium products; it sells a premium decision-making experience, by reducing friction for the consumer.

Furthermore, a 2015 study analysing scenarios in which fewer options led to increased sales found the following moderating factors in buying choices:

Time pressure: easier and quicker choices led to more sales.
Complexity of options: the easier it was to understand what a product was, the better the outcome.
Clarity of preference: how easy it was to compare alternatives, and how clear one's own preferences were.
Motivation to optimize: whether the consumer wanted to put in the effort to find the 'best' option.

Picking the right spread

While the extent of the
validity of the Paradox of Choice is up for debate, its impact cannot be denied. It remains a helpful model for driving sales and boosting customer satisfaction. So how can you use it as part of your business's strategy? Remember: what people want isn't 50 good choices. They want one confident, easy-to-understand decision that they believe they will not regret. Here are some common mistakes that confuse consumers, and how the jam jar strategy curates choices instead:

[Image created using Canva]

1. Too many choices lead to decision fatigue. Offering many SKUs usually overwhelms customers. Instead, curate 2 to 3 strong options that cover the majority of their needs.

2. Depending on users to work through filters and specifications. When users have to compare specifications themselves, they usually end up doing nothing. Instead, replace filters with clear labels like "Best for beginners" or "Best for oily skin."

3. Leaving users to make comparisons on their own. Too many options overwhelm users. Instead, offer defaults that show what you recommend; this gives users confidence when making the final decision.

4. Assuming more transparency means more trust. Information overload rarely leads to conversions. Instead, create a thoughtful flow that guides users to the right choice.

5. Assuming users aim to optimize. Expecting users to weigh every detail before deciding is not realistic; in most cases, they go with their gut. Instead, highlight emotional outcomes, benefits, and uses rather than raw numbers.

6. Not onboarding users. Hoping that users will navigate a sea of products without guidance is unrealistic. Instead, use onboarding tools like starter kits, quizzes, or bundles that act as starting points.

7. Variety for the sake of variety. Users crave clarity more than they crave variety.
Instead, focus on simplicity when it comes to differentiation.

And lastly, remember that while the paradox of choice is a helpful tool in your business strategy arsenal, more choice is not inherently bad; it is the lack of structure in the decision-making process that is the problem. Clear framing will always make decision-making a seamless experience for both your consumers and your business.

How jam jars explain Apple's success was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.