• Would you switch browsers for a chatbot?

    Hi, friends! Welcome to Installer No. 87, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, happy It’s Officially Too Hot Now Week, and also you can read all the old editions at the Installer homepage.)

    This week, I’ve been reading about Sabrina Carpenter and Khaby Lame and intimacy coordinators, finally making a dent in Barbarians at the Gate, watching all the Ben Schwartz and Friends I can find on YouTube, planning my days with the new Finalist beta, recklessly installing all the Apple developer betas after WWDC, thoroughly enjoying Dakota Johnson’s current press tour, and trying to clear all my inboxes before I go on parental leave. It’s… going.

    I also have for you a much-awaited new browser, a surprise update to a great photo editor, a neat trailer for a meh-looking movie, a classic Steve Jobs speech, and much more. Slightly shorter issue this week, sorry; there’s just a lot going on, but I didn’t want to leave y’all hanging entirely. Oh, and: we’ll be off next week, for Juneteenth, vacation, and general summer chaos reasons. We’ll be back in full force after that, though! Let’s get into it.

    (As always, the best part of Installer is your ideas and tips. What do you want to know more about? What awesome tricks do you know that everyone else should? What app should everyone be using? Tell me everything: installer@theverge.com. And if you know someone else who might enjoy Installer, forward it to them and tell them to subscribe here.)

    The Drop

    Dia. I know there are a lot of Arc fans here in the Installerverse, and I know you, like me, will have a lot of feelings about the company’s new and extremely AI-focused browser. Personally, I don’t see myself leaving Arc anytime soon, but there are some really fascinating ideas (and nice design touches) in Dia already.

    Snapseed 3.0. I completely forgot Snapseed even existed, and now here’s a really nice update with a bunch of new editing tools and a nice new redesign! As straightforward photo editors go, this is one of the better ones. The new version is only on iOS right now, but I assume it’s heading to Android shortly.

    “I Tried To Make Something In America.” I was first turned onto the story of the Smarter Scrubber by a great Search Engine episode, and this is a great companion to the story about what it really takes to bring manufacturing back to the US. And why it’s hard to justify.

    That link, and the trailer, will only do anything for you if you have a newer iPhone. But even if you don’t care about the movie, the trailer — which actually buzzes in sync with the car’s rumbles and revs — is just really, really cool.

    Android 16. You can’t get the cool, colorful new look just yet, or the desktop mode I am extremely excited about — there’s a lot of good stuff in Android 16, but most of it is coming later. Still, Live Updates look good, and there’s some helpful accessibility stuff, as well.

    The Infinite Machine Olto. I am such a sucker for any kind of futuristic-looking electric scooter, and this one really hits the sweet spot. Part moped, part e-bike, all Blade Runner vibes. If it wasn’t $3,500, then I would’ve probably ordered one already.

    The Fujifilm X-E5. I kept wondering why Fujifilm didn’t just make, like, a hundred different great-looking cameras at every imaginable price, because everyone wants a camera this cool. Well, here we are! It’s a spin on the X100VI but with interchangeable lenses and a few power-user features. All my photographer friends are going to want this.

    Call Her Alex. I confess I’m no Call Her Daddy diehard, but I found this two-part doc on Alex Cooper really interesting. Cooper’s story is all about understanding people, the internet, and what it means to feel connected now. It’s all very low-stakes and somehow also existential? It’s only two parts; you should watch it.

    “Steve Jobs - 2005 Stanford Commencement Address.” For the 20th anniversary of Jobs’ famous (and genuinely fabulous) speech, the Steve Jobs Archive put together a big package of stories, notes, and other materials around the speech. Plus, a newly high-def version of the video. This one’s always worth the 15 minutes.

    Dune: Awakening. Dune has ascended to the rare territory of “I will check out anything from this franchise, ever, no questions asked.” This game is big on open-world survival and ornithopters, too, so it’s even more my kind of thing. And it’s apparently punishingly difficult in spots.

    Crowdsourced

    Here’s what the Installer community is into this week. I want to know what you’re into right now as well! Email installer@theverge.com or message me on Signal — @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. For even more great recommendations, check out the replies to this post on Threads and this post on Bluesky.

    “I had tried the paper planner in the leather Paper Republic journal but since have moved on to the Remarkable Paper Pro color e-ink device, which takes everything you like about paper but makes it editable and color-coded. Combine this with a Remarkable planner in PDF format off of Etsy and you are golden.” — Jason

    “I started reading a manga series from content creator Cory Kenshin called Monsters We Make. So far, I love it. Already preordered Vol. 2.” — Rob

    “I recently went down the third-party controller rabbit hole after my trusty adapted Xbox One controller finally kicked the bucket, and I wanted something I could use across my PC, phone, handheld, Switch, etc. I’ve been playing with the GameSir Cyclone 2 for a few weeks, and it feels really deluxe. The thumbsticks are impossibly smooth and accurate thanks to its TMR joysticks. The face buttons took a second for my brain to adjust to; the short travel distance initially registered as mushy, but once I stopped trying to pound the buttons like I was at the arcade, I found the subtle mechanical click super satisfying.” — Sam

    “The Apple TV Plus miniseries Long Way Home. It’s Ewan McGregor and Charley Boorman’s fourth Long Way series. This time they are touring some European countries on vintage bikes that they fixed, and it’s such a lighthearted show from two really down-to-earth humans. Connecting with other people in different cultures and seeing their journey is such a treat!” — Esmael

    “Podcast recommendation: Devil and the Deep Blue Sea by Christianity Today. A deep dive into the Satanic Panic of the ’80s and ’90s.” — Drew

    “Splatoon 3 (the free Switch 2 update) and the new How to Train Your Dragon.” — Aaron

    “I can’t put Mario Kart World down. When I get tired of the intense Knockout Tour mode, I go to Free Roam and try to knock out P-Switch challenges, some of which are really tough! I’m obsessed.” — Dave

    “Fable, a cool app for finding books with virtual book clubs. It’s the closest to a more cozy online bookstore with more honest reviews. I just wish you could click on the author’s name to see their other books.” — Astrid

    “This is the Summer Games Fest week (formerly E3, RIP) and there are a TON of game demos to try out on Steam. One that has caught my attention / play time the most is Wildgate. It’s a team-based spaceship shooter where ship crews battle and try to escape with a powerful artifact.” — Sean

    “Battlefront 2 is back for some reason. Still looks great.” — Ian

    Signing off

    I have long been fascinated by weather forecasting. I recommend Andrew Blum’s book, The Weather Machine, to people all the time, as a way to understand both how we learned to predict the weather and why it’s a literally culture-changing thing to be able to do so. And if you want to make yourself so, so angry, there’s a whole chunk of Michael Lewis’s book, The Fifth Risk, about how a bunch of companies managed to basically privatize forecasts… based on government data. The weather is a huge business, an extremely powerful political force, and even more important to our way of life than we realize. And we’re really good at predicting the weather!

    I’ve also been hearing for years that weather forecasting is a perfect use for AI. It’s all about vast quantities of historical data, tiny fluctuations in readings, and finding patterns that often don’t want to be found. So, of course, as soon as I read my colleague Justine Calma’s story about a new Google project called Weather Lab, I spent the next hour poking through the data to see how well DeepMind managed to predict and track recent storms. It’s deeply wonky stuff, but it’s cool to see Big Tech trying to figure out Mother Nature — and almost getting it right. Almost.

    See you next week!
  • Games Inbox: Would Xbox ever shut down Game Pass?

    Game Pass – will it continue forever? The Monday letters page struggles to predict what’s going to happen with the PlayStation 6, as one reader sees their opinion of the Switch 2 change over time.
    To join in with the discussions yourself, email gamecentral@metro.co.uk
    Final Pass
    I agree with a lot of what was said about the current state of Xbox in the Reader’s Feature this weekend, and how the more Microsoft spends, and the more companies they own, the less they seem to be in control. Which is very strange, really.

    The biggest recent failure has got to be Game Pass, which has not had the impact they expected, and yet they don’t seem ready to acknowledge that. If they’re thinking of increasing the price again, like those rumours say, then I think that will be the point at which you can draw a line under the whole idea and admit it’s never going to catch on.
    But would Microsoft ever shut down Game Pass completely? I feel that would almost be more humiliating than stopping making consoles, so I can’t really imagine it. Instead, they’ll make it more and more expensive and put more and more restrictions on day one games until it’s no longer recognisable.
    Grackle
    Panic button
    Strange to see Sony talking relatively openly about Nintendo and Microsoft as competition. I can’t remember the last time they mentioned either of them, and they obviously wouldn’t have now, if they hadn’t been asked by investors.

    At no point did they acknowledge that the Switch has completely outsold both their last two consoles, so I’m not sure where their confidence comes from. I guess it’s from the fact that they know they’ve done nothing this gen and still come out on top, so from their perspective they’ve got plenty in reserve.


    Having your panic button be ‘do anything at all’ must be pretty reassuring, really. Nintendo has had to work to get where they are with the Switch, but Sony is just coasting.
    Lupus
    James’ Ladder
    Jacob’s Ladder is a film I’ve been meaning to watch for a while, and I guessed the ending quite early on, but it feels like a Silent Hill film. I don’t know if you guys have seen it, but it’s an excellent film, and the hospital scene near the end, and the cages blocking off the underground early on, just remind me of the game.
    A depressing film overall, but worth a watch.
    Simon
    GC: Jacob’s Ladder was a major influence on Silent Hill 2 in particular; even the jacket James is wearing is the same.
    Email your comments to: gamecentral@metro.co.uk
    Seeing the future
    I know everyone likes to think of themselves as Nostradamus, but I have to admit I have absolutely no clue what Sony is planning for the PlayStation 6. A new console that is just the usual update, that sits under your TV, is easy enough to imagine, but surely they’re not going to do that again?

    But the idea of having new home and portable machines that come out at the same time seems so unlikely to me. Surely the portable wouldn’t be a separate format, but I can’t see it being any kind of portable that runs its own games, because it’d never be as powerful as the home machine. So, it’s really just a PlayStation Portal 2?
    Like I said, I don’t know, but for some reason I have a bad feeling about the next gen and whatever Sony does end up unveiling. I suspect that whatever they and Microsoft do, it’s going to end up making the Switch 2 seem even more appealing by comparison.
    Gonch
    Hidden insight
    I’m not going to say that Welcome Tour is a good game, but what I will say is that I found it very interesting at times, and I’m actually kind of surprised that Nintendo revealed some of the information that they did. Most of it could probably be found out by reverse engineering the console and just taking it apart, but I’m still surprised it went into as much detail as it did.

    You’re right that it’s all presented in a very dull way, but personally I found the ‘Insights’ to be the best part of the game. The minigames really are not very good and I was always glad when they were over. So, while I would not necessarily recommend the game, I would say that it can be of interest to people who have an interest in how consoles work and how Nintendo think.
    Mogwai
    Purchase privilege
    I’ve recently had the privilege of buying Clair Obscur: Expedition 33 from the website CDKeys, using a 10% discount code. I was lucky enough to only spend a total of £25.99; much cheaper than purchasing the title for console. If only Ubisoft had the foresight to see what they allowed to slip through their fingers.

    I’d also like to mention that from what I’ve read quite recently, and a couple of mixed views, I don’t see myself cancelling my Switch 2. On the contrary, it’s just coming across as a disappointment. From the battery life to the lack of launch titles, an empty open world is never a smart choice to make; not even Mario is safe from that. That leaves the upcoming ROG Xbox Ally, which was recently showcased and is set for an October launch.
    I won’t lie, it does look in the same vein as the Switch 2; far too similar to the ROG Ally X model, just with grips and a dedicated Xbox button. The Z2 Extreme chip has me intrigued, however. How much of a transcendental shift it makes is another question. I’ll have to wait for official confirmation of a price and release date. But there’s also a Lenovo Legion Go 2 waiting in the wings. I hope we hear more information soon. Preferably before my 28th in August.
    Shahzaib Sadiq
    Tip of the iceberg
    Interesting to hear about Cyberpunk 2077 running well on the Switch 2. I think if they’re getting that kind of performance at launch, from a third party not used to working with Nintendo hardware, that bodes very well for the future.

    I think we’re probably underestimating the Switch 2 a lot at the moment, and the stuff we’ll be seeing in two or three years is going to be amazing, I predict. What I can’t predict is when we’ll hear about any of this. I really hope there’s a Nintendo Direct this week.
    Dano
    Changing opinions
    So, just a little over a week with the Switch 2, and after initially feeling incredibly meh about the new console and Mario Kart, a little more playtime has left me more optimistic about the console and much more positive about Mario Kart World.

    It did feel odd having a new console from Nintendo that didn’t inspire that childlike excitement. An iterative upgrade isn’t very exciting, and as I own a Steam Deck, the advancements in processing weren’t all that exciting either. I can imagine someone who only bought an OG Switch back in 2017 really noticing the improvements, but if you bought an OLED it’s basically a Switch Pro.
    The criminally low level of software support doesn’t help. I double dipped on Street Fighter 6, only to discover I can’t transfer progress or DLC across from my Xbox, which sort of means if I want both profiles to have parity I have to buy everything twice! I also treated myself to a new Pro Controller and find using it for Street Fighter almost unplayable, as the L and ZL buttons are far too easy to accidentally press when playing.
    Mario Kart initially felt like more of the same and it was only after I made an effort to explore the world map, unlock characters and karts, and try the new grinding/ollie mechanic that it clicked. I am now really enjoying it, especially the remixed soundtracks.
    I do, however, want more Switch 2 exclusive experiences – going back through my back catalogue for improved frame rates doesn’t cut it, Nintendo! As someone with a large digital library, I found the system transfer very frustrating, and the new virtual cartridges are just awful – does a Switch 2 need to be online all the time now? Not the best idea for a portable system.
    So, the start of a new console lifecycle and hopefully lots of new IP – I suspect Nintendo will try and get us to revisit our back catalogues first, though.
    BristolPete
    Inbox also-rans
    Just thought I would mention that if anyone’s interested in purchasing the Mortal Kombat 1 Definitive Edition, which includes all DLC, it’s currently an absolute steal on the Xbox store at £21.99.
    Nick The Greek

    I’ve just won my first Knockout Tour online race on Mario Kart World! I’ve got to say, the feeling is magnificent.
    Rable


    Email your comments to: gamecentral@metro.co.uk
    The small print
    New Inbox updates appear every weekday morning, with special Hot Topic Inboxes at the weekend. Readers’ letters are used on merit and may be edited for length and content.
    You can also submit your own 500 to 600-word Reader’s Feature at any time via email or our Submit Stuff page, which if used will be shown in the next available weekend slot.
    You can also leave your comments below and don’t forget to follow us on Twitter.
    MORE: Games Inbox: Is Mario Kart World too hard?

  • Why Designers Get Stuck In The Details And How To Stop

    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar?
    In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap.
    Reason #1: You’re Afraid To Show Rough Work
    We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed.
    I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them.
    The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief.
    The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem.
    So why do we hide these rough sketches? It’s not just a bad habit or plain silly. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology shows there are a couple of flavors driving this:

    Socially prescribed perfectionism: the nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den.
    Self-oriented perfectionism: you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off.

    Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback.
    Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift:
    Treat early sketches as disposable tools for thinking and actively share them to get feedback faster.

    Reason #2: You Fix The Symptom, Not The Cause
    Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data.
    From my experience, here are several reasons why users might not be clicking that coveted button:

    Users don’t understand that this step is for payment.
    They understand it’s about payment but expect order confirmation first.
    Due to incorrect translation, users don’t understand what the button means.
    Lack of trust signals.
    Unexpected additional costs that appear at this stage.
    Technical issues.

    Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly.
    Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously few users are clicking the button.
    Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers — and understanding that using our product logic expertise proactively is crucial for modern designers.
    There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers.
    Reason #3: You’re Solving The Wrong Problem
    Before solving anything, ask whether the problem even deserves your attention.
    During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests showed minimal impact, we continued to tweak those buttons.
    Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned:
    Without the right context, any visual tweak is lipstick on a pig.

    Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising.
    It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours.
    Reason #4: You’re Drowning In Unactionable Feedback
    We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow.
    What matters here are two things:

    The question you ask,
    The context you give.

    That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it.
    For instance:
    “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?”

    Here, you’ve stated the problem, shared your insight, explained your solution, and asked a direct question. It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?”
    Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately to either revisit those ideas or definitively set them aside.
    I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like, “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory.
    So, to wrap up this point, here are two recommendations:

    Prepare for every design discussion. A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”
    Actively work on separating feedback on your design from your self-worth. If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it.

    Reason #5: You’re Just Tired
    Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing.
    A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?” It described how judges deciding on release requests were far more likely to grant release early in the day than late in the day, simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity.
    What helps here:

    Swap tasks. Trade tickets with another designer; novelty resets your focus.
    Talk to another designer. If NDA permits, ask peers outside the team for a sanity check.
    Step away. Even a ten‑minute walk can do more than a double‑shot espresso.

    By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit.

    And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time.
    Four Steps I Use to Avoid Drowning In Detail
    Knowing these potential traps, here’s the practical process I use to stay on track:
    1. Define the Core Problem & Business Goal
    Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask ‘why’ repeatedly. What user pain or business need are we addressing? Then, state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.
    2. Choose the Mechanic (Solution Principle)
    Once the core problem and goal are clear, lock the solution principle, or ‘mechanic’, first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.
    3. Wireframe the Flow & Get Focused Feedback
    Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear context (as discussed in ‘Reason #4’) to get actionable feedback, not just vague opinions.
    4. Polish the Visuals (Mindfully)
    I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution.
    Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.
    Wrapping Up
    Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding. Yes, that can expose an uncomfortable truth: maybe the core problem is still fuzzy, or you’re dodging tough feedback. But naming it gives you the power to face the real issue head-on, and it keeps the project focused on solving the right problem, not just perfecting a flawed solution.
    Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
    SMASHINGMAGAZINE.COM
    Why Designers Get Stuck In The Details And How To Stop
  • Inside Mark Zuckerberg’s AI hiring spree

    AI researchers have recently been asking themselves a version of the question, “Is that really Zuck?” As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone. For those who do agree to hear his pitch, Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.

    Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI. “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”

    Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai.
I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training. Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies. Meta’s internal coding tool for engineers, however, is already using Claude. While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s multibillion-dollar investment in Scale is making many longtime employees, or “Scaliens,” quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks after Zuckerberg gets a critical number of members to officially sign on.

Tim Cook. Getty Images / The Verge

Apple’s AI problem

Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.

Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026. The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features.
Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course.

Apple’s decision to let developers use its own, on-device foundational models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems — a snail’s pace compared to how quickly AI companies move. I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models. I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants.

Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.

AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time.
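For a sense of what a 4,096-token window means in practice, here is a rough sketch of the kind of history trimming a developer would have to do before every request. This is a hypothetical illustration using a crude characters-per-token heuristic, not Apple's actual API or tokenizer:

```python
# Hypothetical sketch: fitting a chat history into a small context window.
# Assumes a rough 4-characters-per-token heuristic for English text;
# real tokenizers vary, and Apple's on-device tokenizer is not public.

MAX_TOKENS = 4096

def rough_token_count(text: str) -> int:
    """Crude estimate: about 4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int = MAX_TOKENS) -> list[str]:
    """Keep the most recent messages that fit within the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = rough_token_count(msg)
        if used + cost > budget:
            break                           # older messages get dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order
```

A conversation longer than a few pages of text simply will not fit, which is why the window size matters so much for real apps.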
The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.

Elsewhere

AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need (and want) to approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”

Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off.

Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent $3 billion on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google.
A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”

Link list

More to click on:

If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.

As always, I welcome your feedback, especially if you’re an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal.

Thanks for subscribing.
    WWW.THEVERGE.COM
    Inside Mark Zuckerberg’s AI hiring spree
  • Making a killing: The playful 2D terror of Psycasso®

    A serial killer is stalking the streets, and his murders are a work of art. That’s more or less the premise behind Psycasso®, a tongue-in-cheek 2D pixel art game from Omni Digital Technologies that’s debuting a demo at Steam Next Fest this week, with plans to head into Early Access later this year. Playing as the killer, you get a job and build a life by day, then hunt the streets by night to find and torture victims, paint masterpieces with their blood, then sell them to fund your operations.

I sat down with lead developer Benjamin Lavender and designer and producer Omni to talk about this playfully gory game that gives a classic retro style a fresh twist.

Let’s start with a bit of background about the game.

Omni: We wanted to make something that stands out. We know a lot of indie studios are releasing games and the market is ever growing, so we wanted to make something that’s not just fun to play, but catches people’s attention when others tell them about it. We’ve created an open-world pixel art game about an artist who spends his day getting a job, trying to fit into society. Then at nighttime, things take a more sinister turn and he goes around and makes artwork out of his victims’ blood.

We didn’t want to make it creepy and gory. We kind of wanted it to be cutesy and fun, just to make it ironic. Making it was a big challenge. We basically had to create an entire city with functioning shops and NPCs who have their own lives, their own hobbies. It was a huge challenge.

So what does the actual gameplay look like?

Omni: There’s a day cycle and a night cycle that breaks up the gameplay. During the day, you can get a job, level up skills, buy properties and furniture upgrades. At nighttime, the lighting completely changes, the vibe completely changes, there’s police on the street, and the flow of the game shifts.
The idea is that you can kidnap NPCs using a whole bunch of different weapons – guns, throwable grenades, little traps and cool stuff that you can capture people with. Once captured on the street, you can either harvest their blood and body parts there, or buy a specialist room to keep them in a cage and put them in various equipment like hanging chains or torture chairs. The player gets better rewards for harvesting blood and body parts this way.

On the flip side, there’s a whole other element to the game where the player is given missions each week from galleries around the city. They come up on your phone menu, and you can accept them and do either portrait or landscape paintings, with all of the painting being done using only shades of red. We’ve got some nice drip effects and splat sounds to make it feel like you’re painting with blood. Then you can give your creation a name, submit it to a gallery, and it goes into a fake auction. People will bid on the artwork and you get paid a large amount of in-game money, so you can then buy upgrades for the home and upgrade painting tools – bigger paint brushes, more selection tools, stuff like that.

Ben: There’s definitely nothing like it. And that was the aim: when you’re telling people about it, they’re like, “Oh, okay. Right. We’re not going to forget about this.”

    Let’s dig into the 2D tools you used to create this world.

Ben: It’s using the 2D Renderer. The Happy Harvest 2D sample project that you guys made was kind of a big starting point, from a lighting perspective, and doing the normal maps of the 2D and getting the lighting to look nice. Our night system is a very stripped-down, then added-on version of the thing that you guys made. I was particularly interested in its shadows. The buildings’ shadows aren’t actually shadows – it’s a black light. We tried to recreate that with all of our buildings in the entire open world – so it does look beautiful for a 2D game, if I do say so myself.

Can you say a bit about how you’re using AI or procedural generation in NPCs?

Ben: I don’t know how many actually made it into the demo, to be fair, number-wise. Every single NPC has a unique identity, as in they all have a place of work that they go to on a regular schedule. They have hobbies, they have spots where they prefer to loiter, a park bench or whatever. So you can get to know everyone’s individual lifestyle. So, the old man that lives in the same building as me might love to go to the casino at nighttime, or go consistently on a Monday and a Friday, that kind of vibe.

It uses the A* Pathfinding Project, because we knew we wanted to have a lot of AIs. We’ve locked off most of the city for the demo, but the actual size of the city is huge. The police mechanics are currently turned off, but they’re about 80% in there as well. If you punch someone or hurt someone, that’s a crime, and if anyone sees it, they can go and report to the police and then things happen. That’s a feature that’s there but not demo-ready yet.

How close would you say you are to a full release?

Omni: We should be scheduled for October for Early Access. By that point we’ll have the stealth mechanics and the policing systems polished and in, and some of the other upcoming features buttoned up.
We’re fairly close.

Ben: Lots of it’s already done, it’s just turned off for the demo. We don’t want to overwhelm people because there’s just so much for the player to do.

Tell me a bit about the paint mechanics – how did you build that?

Ben: It is custom. We built it ourselves completely from scratch. But I can’t take responsibility for that one – someone else did the whole thing – that was their baby. It is really, really cool though.

Omni: It’s got a variety of masking tools, the ability to change opacity and spacing, and you can undo and redo. It’s a really fantastic feature that gives people the opportunity to express themselves and make some great art.

Ben: And it’s gamified, so it doesn’t feel like you’ve just opened up Paint in Windows.

Omni: Best of all, when you make a painting, it gets turned into an inventory item, so you physically carry it around with you and can sell it or treasure it.

What’s the most exciting part of Psycasso for you?

Omni: Stunning graphics. I think graphically, it looks really pretty.

Ben: Visually, you could look at it and go, “Oh, that’s Psycasso.”

Omni: What we’ve done is taken a cozy retro-style game and brought modern design, logic, and technology into it. So you’re playing what feels like a nostalgic game, but you’re getting the experience of a much newer project.

Check out the Psycasso demo on Steam, and stay tuned for more Next Fest coverage.
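The NPC routines Ben describes (a regular workplace, favorite hobby spots, a weekly schedule like the old man's casino nights) map naturally onto a small data structure. As a hypothetical sketch in Python, not the game's actual Unity code, a per-NPC schedule could be modeled like this:

```python
# Hypothetical sketch of per-NPC routines as described in the interview.
# Names, fields, and locations are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class NPC:
    name: str
    workplace: str       # where the NPC goes on their regular daytime schedule
    hobby_spot: str      # preferred loitering spot (park bench, casino, ...)
    # Days of the week (0=Mon .. 6=Sun) on which the NPC visits the hobby spot at night.
    hobby_nights: set = field(default_factory=set)

    def location(self, day: int, is_night: bool) -> str:
        """Return where this NPC can be found at a given day/time."""
        if not is_night:
            return self.workplace
        return self.hobby_spot if day in self.hobby_nights else "home"

# The old man from Ben's example: casino on Monday and Friday nights.
old_man = NPC("old man", workplace="newsstand", hobby_spot="casino",
              hobby_nights={0, 4})
```

In the game itself, a lookup like this would feed the pathfinding layer, which then routes each NPC to its scheduled location; that is roughly why a dedicated pathfinding solution matters once you have a whole city of NPCs moving on individual schedules.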
    #making #killing #playful #terror #psycasso
    Making a killing: The playful 2D terror of Psycasso®
    A serial killer is stalking the streets, and his murders are a work of art. That's more or less the premise behind Psycasso®, a tongue-in-cheek 2D pixel art game from Omni Digital Technologies that's debuting a demo at Steam Next Fest this week, with plans to head into Early Access later this year. Playing as the killer, you get a job and build a life by day, then hunt the streets by night to find and torture victims, paint masterpieces with their blood, and sell them to fund your operations. I sat down with lead developer Benjamin Lavender and designer and producer Omni to talk about this playfully gory game that gives a classic retro style a fresh (if gruesome) twist.

Let's start with a bit of background about the game.

Omni: We wanted to make something that stands out. We know a lot of indie studios are releasing games and the market is ever growing, so we wanted to make something that's not just fun to play, but catches people's attention when others tell them about it. We've created an open-world pixel art game about an artist who spends his day getting a job, trying to fit into society. Then at nighttime, things take a more sinister turn and he goes around and makes artwork out of his victims' blood.

We didn't want to make it creepy and gory. We kind of wanted it to be cutesy and fun, just to make it ironic. Making it was a big challenge. We basically had to create an entire city with functioning shops and NPCs who have their own lives, their own hobbies.

So what does the actual gameplay look like?

Omni: There's a day cycle and a night cycle that breaks up the gameplay. During the day, you can get a job, level up skills, and buy properties and furniture upgrades. At nighttime, the lighting completely changes, the vibe completely changes, there are police on the street, and the flow of the game shifts. The idea is that you can kidnap NPCs using a whole bunch of different weapons – guns, throwable grenades, little traps, and other cool stuff you can capture people with. Once they're captured on the street, you can either harvest their blood and body parts there, or buy a specialist room to keep them in a cage and put them in various equipment like hanging chains or torture chairs. The player gets better rewards for harvesting blood and body parts this way.

On the flip side, there's a whole other element to the game where the player is given missions each week from galleries around the city. They come up on your phone menu, and you can accept them and do either portrait or landscape paintings, with all of the painting done using only shades of red. We've got some nice drip effects and splat sounds to make it feel like you're painting with blood. Then you can give your creation a name and submit it to a gallery, where it goes into a fake auction. People bid on the artwork, and you get paid a large amount of in-game money so you can buy upgrades for the home and upgrade your painting tools – bigger paint brushes, more selection tools, stuff like that.

Ben: There's definitely nothing like it. And that was the aim: when you're telling people about it, they're like, "Oh, okay. Right. We're not going to forget about this."

Let's dig into the 2D tools you used to create this world.

Ben: It's using the 2D Renderer. The Happy Harvest 2D sample project that you guys made was kind of a big starting point, from a lighting perspective – doing the normal maps in 2D and getting the lighting to look nice. Our night system is a very stripped-down, then added-on version of the thing that you guys made. I was particularly interested in its shadows. The buildings' shadows aren't actually shadows – it's a black light. We tried to recreate that with all of our buildings in the entire open world – so it does look beautiful for a 2D game, if I do say so myself.

Can you say a bit about how you're using AI or procedural generation in NPCs?

Ben: I don't know how many actually made it into the demo, to be fair, number-wise. Every single NPC has a unique identity: they all have a place of work that they go to on a regular schedule. They have hobbies, they have spots where they prefer to loiter – a park bench or whatever. So you can get to know everyone's individual lifestyle. The old man who lives in the same building as me might love to go to the casino at nighttime, or go consistently on a Monday and a Friday – that kind of vibe.

It uses the A* Pathfinding Project, because we knew we wanted to have a lot of AIs. We've locked off most of the city for the demo, but the actual size of the city is huge. The police mechanics are currently turned off, but they're about 80 percent there as well. If you punch someone or hurt someone, that's a crime, and if anyone sees it, they can report it to the police, and then things happen. That's a feature that's there but not demo-ready yet.

How close would you say you are to a full release?

Omni: We're scheduled for October for Early Access. By that point we'll have the stealth mechanics and the policing systems polished and in, and some of the other upcoming features buttoned up. We're fairly close.

Ben: Lots of it's already done; it's just turned off for the demo. We don't want to overwhelm people, because there's just so much for the player to do.

Tell me a bit about the paint mechanics – how did you build those?

Ben: It's custom – we built it ourselves completely from scratch. But I can't take responsibility for that one; someone else did the whole thing – that was their baby. It is really, really cool, though.

Omni: It's got a variety of masking tools and the ability to change opacity and spacing, and you can undo and redo. It's a really fantastic feature that gives people the opportunity to express themselves and make some great art.

Ben: And it's gamified, so it doesn't feel like you've just opened up Paint in Windows.

Omni: Best of all, when you make a painting, it gets turned into an inventory item, so you physically carry it around with you and can sell it or treasure it.

What's the most exciting part of Psycasso for you?

Omni: Stunning graphics. I think graphically, it looks really pretty.

Ben: Visually, you could look at it and go, "Oh, that's Psycasso."

Omni: What we've done is taken a cozy retro-style game and brought modern design, logic, and technology into it. So you're playing what feels like a nostalgic game, but you're getting the experience of a much newer project.

Check out the Psycasso demo on Steam, and stay tuned for more Next Fest coverage.
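Ben describes each NPC as data – a workplace, favorite loitering spots, and weekly night habits – with the A* Pathfinding Project handling the actual movement. As a rough illustration of that idea (this is not Psycasso's actual code; every name and time window below is invented), a routine like the old man's casino nights could be driven by something as simple as:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the per-NPC routine described in the interview:
# each NPC knows where it works, where it loiters, and which nights it
# indulges a habit. Purely illustrative, not the studio's implementation.

@dataclass
class NPC:
    name: str
    home: str
    workplace: str
    hobby_spot: str
    night_habit_days: set = field(default_factory=set)
    night_habit_spot: str = ""

    def destination(self, day: str, hour: int) -> str:
        """Where this NPC wants to be at a given day and hour (0-23)."""
        if 9 <= hour < 17:
            return self.workplace          # day cycle: go to work
        if 17 <= hour < 20:
            return self.hobby_spot         # evening: loiter at a favorite spot
        if hour >= 20 and day in self.night_habit_days:
            return self.night_habit_spot   # e.g. the old man's casino nights
        return self.home                   # otherwise, head home

old_man = NPC(
    name="Old Man", home="Apartment 3B", workplace="Newsstand",
    hobby_spot="Park Bench",
    night_habit_days={"Mon", "Fri"}, night_habit_spot="Casino",
)

print(old_man.destination("Mon", 21))  # Casino
print(old_man.destination("Tue", 21))  # Apartment 3B
```

As the in-game clock advances, each agent would then pathfind to whatever its current destination is – the part a package like the A* Pathfinding Project takes care of in Unity.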
    UNITY.COM
    Making a killing: The playful 2D terror of Psycasso®
  • Confidential Killings [Free] [Adventure] [macOS]

    Set in the glitzy world of Hollywood in the late '70s, Confidential Killings has you investigating a series of gruesome murders that seem connected. There are rumours about a mysterious cult behind them... 
    Explore the crime scenes and use your detective skills to deduce what's going on!
    Wishlist on Steam: https://store.steampowered.com/app/2797960/Confidential_Killings
    Join our discord: https://discord.gg/xwFXgbb2xf
    Development log: Demo out! (10 days ago)
    Comments from the itch.io page:
    "I LOVE it! The art, the gameplay, the story, it's so much fun!"
    "First I was like 'nah, so you just want to check if I have read everything, or what?' but later it made sense with the twists and hunting for the word you already know but need to find elsewhere."
    Picto Games (21 hours ago): "the cursor is blinking, it is very disturbing, and the game very good"
    BRANE (15 hours ago): "I recommend trying the desktop builds if you'd like to play without this issue. Or putting more fire on this PR of Godot: https://github.com/godotengine/godot/pull/103304"
    beautifulDegen (1 day ago): "Nice game"
    lovedovey666 (1 day ago): "I love this game! i like the detective games and this is perfect :3"
    "This is a great game! The old-style detective game ambience is superb, and the art sublime. The mysteries were pretty entertaining and interesting enough to keep you going as you think about what truly happened!"
    "I had to take notes... my memory ain't great lol. Really enjoyed it."
    Sebbog (1 day ago): "This game is kind of like the detective games The Case of the Golden Idol and its sequel, The Rise of the Golden Idol, from 2022 and 2024 respectively. It's not just bullshit – it has a coherent story. If you haven't heard of the Golden Idol games, it's basically a game where you investigate mysterious deaths and fill in the blanks of the story. You can navigate between multiple different scenes and click on people and objects to gather important clues. I think it was a good game. I like that it's similar to the Golden Idol games. I also liked that you could see the exact number of wrong slots when it's two or fewer – it said either two incorrect or one incorrect. This isn't how it works in the Golden Idol games, and it might make the game too easy; I'm not sure. I also streamed this game on YouTube:"
    MV_Zarch (3 days ago): "I'm so happy I found this game. Amazing! The mysteries are just so good and well done. The art is beautiful and really sets up the atmosphere well. I am really interested to see the full game."
    reveoncelink (5 days ago): "It was amazing!! Perfect gameplay and so many clues to connect the dots. Amazing."
    "Hey, this is a great game except for the flickering of the cursor. It's the same for your other games (We Suspect Foul Play afaik). Hope this gets fixed! (I'm on Chrome)"
    BRANE (6 days ago): "heya! For the flickering issue I'm not really sure what the problem is, but having a screen recording of it could help. Other than that, we're not that focused on fixing the web build as it's going to be a PC game – so I suggest trying the Windows build."
    "Really fun! Wishing you guys lots of luck!" – BRANE: "Thank you!"
    kcouchpotato (8 days ago): "This game is so awesome!! I've wishlisted it on steam." – BRANE (8 days ago): "Thank you!"
    BRANEGAMES.ITCH.IO
    Confidential Killings [Free] [Adventure] [macOS]
  • Ants Do Poop and They Even Use Toilets to Fertilize Their Own Gardens

    Key Takeaways on Ant Poop

Do ants poop? Yes. Any creature that eats will poop, and ants are no exception. Because ants live in close quarters, they need to keep the colony away from its feces so the bacteria and fungi the waste can harbor don't threaten the colony's health. This is why some ants use toilet chambers. Whether they isolate it in a toilet chamber or kick it to the curb, ants don't keep their waste around. But some ants find a use for that stuff. One such species is the leafcutter ant, which takes little clippings of leaves and uses them to grow a very particular fungus that it then eats.

Like urban humans, ants live in close quarters. Ant colonies can be home to thousands, even tens of thousands, of individuals, depending on the species. And like any creature that eats, ants poop. When you combine close quarters and loads of feces, you have a recipe for disease, says Jessica Ware, curator and division chair of Invertebrate Zoology at the American Museum of Natural History. "Ant poop can harbor bacteria, and because it contains partly undigested food, it can grow bacteria and fungus that could threaten the health of the colony," Ware says. But ant colonies aren't seething beds of disease. That's because ants are scrupulous about hygiene.

Ants Do Poop and Ant Toilets Are Real

[Image: an ant colony underground, showing its chambers. Credit: Lidok_L/Shutterstock]

To keep themselves and their nests clean, ants have evolved some interesting housekeeping strategies. Some types of ants actually have toilets – or at least something we might call toilets. Their nests are very complicated, with lots of different tunnels and chambers, explains Ware, and one of those chambers is a toilet chamber. Ants don't visit the toilet when they feel the call of nature. Instead, worker ants on latrine duty collect the poop and carry it to the toilet chamber, which is located far away from other parts of the nest.

What Does Ant Poop Look Like?

This isn't as messy a chore as it sounds. Like most insects, ants are water-limited, says Ware, so they try to get as much liquid out of their food as possible. This results in small, hard, usually black or brownish pellets of poop. The poop is dry and hard enough that, in ant species that don't have indoor toilet chambers, the workers can just kick it out of the nest.

Ants Use Poop as Fertilizer

Whether they isolate it in a toilet chamber or kick it to the curb, ants don't keep their waste around. Well, at least most types of ants don't. Some ants find a use for that stuff. One such species is the leafcutter ant. "They basically take little clippings of leaves and use these leaves to grow a very particular fungus that they then eat," says Ware. "They don't eat the leaves, they eat the fungus." And yep, they use their poop to fertilize their crops. "They're basically gardeners," Ware says. If you'd like to see leafcutter ants at work in their gardens and you happen to be in the New York City area, drop by the American Museum of Natural History, which has a large colony of fungus-gardening ants on display.

Other Insects That Use Toilets

Ants may have toilets, but termites have even wilder ways of dealing with their waste. Termites and ants might seem similar at first sight, but they aren't closely related. Ants are more closely related to bees, while termites are more closely related to cockroaches, explains Aram Mikaelyan, an entomologist at North Carolina State University who studies the co-evolution of insects and their gut microbiomes. So ants' and termites' styles of social living evolved independently, and their solutions to the waste problem are quite different.

"Termites have found a way to not distance themselves from the feces," says Mikaelyan. "Instead, they use the feces itself as building material." They're able to do this because they feed on wood, Mikaelyan explains. When wood passes through the termites' digestive systems into the poop, it supports a type of bacteria called Actinobacteria. These bacteria are the source of many antibiotics that humans use. (Leafcutter ants also use Actinobacteria to keep their fungus gardens free of parasites.) So that unusual building material acts as a disinfectant. Mikaelyan describes it as "a living disinfectant wall, like a Clorox wall, almost."

Insect Hygiene

It may seem surprising that ants and termites are so tidy and concerned with hygiene, but it's really not uncommon. "Insects in general are cleaner than we think," says Ware. "We often think of insects as being really gross, but most insects don't want to lie in their own filth."

Article Sources

Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review them for scientific accuracy and editorial standards. Review the source used for this article: The American Society for Microbiology, "The Leaf-cutter Ant's 50 Million Years of Farming."

Avery Hurt is a freelance science journalist. In addition to writing for Discover, she writes regularly for a variety of outlets, both print and online, including National Geographic, Science News Explores, Medscape, and WebMD. She's the author of Bullet With Your Name on It: What You Will Probably Die From and What You Can Do About It (Clerisy Press, 2007), as well as several books for young readers. Avery got her start in journalism while attending university, writing for the school newspaper and editing the student nonfiction magazine. Though she writes about all areas of science, she is particularly interested in neuroscience, the science of consciousness, and AI – interests she developed while earning a degree in philosophy.
    WWW.DISCOVERMAGAZINE.COM
    Ants Do Poop and They Even Use Toilets to Fertilize Their Own Gardens
    Key Takeaways on Ant PoopDo ants poop? Yes. Any creature that eats will poop and ants are no exception. Because ants live in close quarters, they need to protect the colony from their feces so bacteria and fungus doesn't infect their health. This is why they use toilet chambers. Whether they isolate it in a toilet chamber or kick it to the curb, ants don’t keep their waste around. But some ants find a use for that stuff. One such species is the leafcutter ant that takes little clippings of leaves and uses these leaves to grow a very particular fungus that they then eat.Like urban humans, ants live in close quarters. Ant colonies can be home to thousands, even tens of thousands of individuals, depending on the species. And like any creature that eats, ants poop. When you combine close quarters and loads of feces, you have a recipe for disease, says Jessica Ware, curator and division chair of Invertebrate Zoology at the American Museum of Natural History. “Ant poop can harbor bacteria, and because it contains partly undigested food, it can grow bacteria and fungus that could threaten the health of the colony,” Ware says. But ant colonies aren’t seething beds of disease. That’s because ants are scrupulous about hygiene.Ants Do Poop and Ant Toilets Are RealAnt colony underground with ant chambers. (Image Credit: Lidok_L/Shutterstock)To keep themselves and their nests clean, ants have evolved some interesting housekeeping strategies. Some types of ants actually have toilets — or at least something we might call toilets. Their nests are very complicated, with lots of different tunnels and chambers, explains Ware, and one of those chambers is a toilet chamber. Ants don’t visit the toilet when they feel the call of nature. Instead, worker ants who are on latrine duty collect the poop and carry it to the toilet chamber, which is located far away from other parts of the nest. What Does Ant Poop Look Like? This isn’t as messy a chore as it sounds. 
Like most insects, ants are water-limited, says Ware, so they try to get as much liquid out of their food as possible. This results in small, hard, usually black or brownish pellets of poop. The poop is dry and hard enough that, for ant species that don't have indoor toilet chambers, the workers can just kick it out of the nest.
Ants Use Poop as Fertilizer
Whether they isolate it in a toilet chamber or kick it to the curb, ants don't keep their waste around. Well, at least most types of ants don't. Some ants find a use for that stuff. One such species is the leafcutter ant. "They basically take little clippings of leaves and use these leaves to grow a very particular fungus that they then eat," says Ware. "They don't eat the leaves, they eat the fungus." And yep, they use their poop to fertilize their crops. "They're basically gardeners," Ware says. If you'd like to see leafcutter ants at work in their gardens and you happen to be in the New York City area, drop by the American Museum of Natural History. They have a large colony of fungus-gardening ants on display.
Other Insects That Use Toilets
Ants may have toilets, but termites have even wilder ways of dealing with their waste. Termites and ants might seem similar at first sight, but they aren't closely related. Ants are more closely related to bees, while termites are more closely related to cockroaches, explains Aram Mikaelyan, an entomologist at North Carolina State University who studies the co-evolution of insects and their gut microbiomes. So ants' and termites' styles of social living evolved independently, and their solutions to the waste problem are quite different.
"Termites have found a way to not distance themselves from the feces," says Mikaelyan. "Instead, they use the feces itself as building material." They're able to do this because they feed on wood, Mikaelyan explains.
When wood passes through the termites' digestive systems into the poop, it fosters the growth of a type of bacteria called Actinobacteria. These bacteria are the source of many antibiotics that humans use. (Leafcutter ants also use Actinobacteria to keep their fungus gardens free of parasites.) So that unusual building material acts as a disinfectant. Mikaelyan describes it as "a living disinfectant wall, like a Clorox wall, almost."
Insect Hygiene
It may seem surprising that ants and termites are so tidy and concerned with hygiene, but it's really not uncommon. "Insects in general are cleaner than we think," says Ware. "We often think of insects as being really gross, but most insects don't want to lie in their own filth."
Article Sources
Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the sources used for this article:
The American Society for Microbiology. The Leaf-cutter Ant's 50 Million Years of Farming
Avery Hurt is a freelance science journalist. In addition to writing for Discover, she writes regularly for a variety of outlets, both print and online, including National Geographic, Science News Explores, Medscape, and WebMD. She's the author of Bullet With Your Name on It: What You Will Probably Die From and What You Can Do About It (Clerisy Press, 2007), as well as several books for young readers. Avery got her start in journalism while attending university, writing for the school newspaper and editing the student nonfiction magazine. Though she writes about all areas of science, she is particularly interested in neuroscience, the science of consciousness, and AI, interests she developed while earning a degree in philosophy.
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.
I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.
So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, …
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?
BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
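The trap Bubeck describes can be sketched in a few lines of code. This is a toy illustration only, not OpenAI's training code: the two candidate responses, the reward values, and the reference probabilities are all invented for the example, and `tuned_policy` simply evaluates the known closed-form optimum of the KL-regularized RLHF objective, where the tuned policy is proportional to `pi_ref(y) * exp(r(y) / beta)`:

```python
import math

# Two hypothetical answers a model might give to "Is my diagnosis correct?"
candidates = {
    "honest": "There is an error in your differential.",
    "sycophantic": "What a creative differential! It looks great.",
}

# A stand-in for the learned reward model. Like the real thing, it is
# imperfect: here it slightly over-rewards agreeable-sounding text.
reward = {"honest": 1.0, "sycophantic": 1.3}

# The pre-RLHF reference policy still favors the honest answer.
reference_prob = {"honest": 0.7, "sycophantic": 0.3}

def tuned_policy(beta: float) -> dict:
    """Closed-form optimum of the KL-regularized objective:
    pi(y) proportional to pi_ref(y) * exp(r(y) / beta).
    Small beta = push hard toward the reward model;
    large beta = stay close to the reference policy."""
    weights = {y: reference_prob[y] * math.exp(reward[y] / beta)
               for y in candidates}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

def preferred(beta: float) -> str:
    """Which answer the tuned policy is most likely to give."""
    policy = tuned_policy(beta)
    return max(policy, key=policy.get)
```

With a small `beta`, optimization chases the flawed reward model and the sycophantic answer wins; with a larger KL penalty, the tuned model stays near the reference and the honest answer survives. That is the sense in which "becoming too good" at optimizing the reward model produces a sycophant.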
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.
That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
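Lee's prediction concerns proofs a machine can certify even when no human can follow them. As a small illustration of what "machine-checkable" means (our own example, not one from the conversation, assuming a recent Lean 4 toolchain where the `omega` tactic is available), the kernel verifies each proof term regardless of whether a reader follows the tactics that produced it:

```lean
-- Lean's kernel certifies these proofs mechanically; a reader can trust the
-- theorem statements without tracing the proof terms the tactics generate.

-- Discharged by the `omega` decision procedure for linear arithmetic.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  omega

-- A proof by induction; each rewrite step is checked by the kernel.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

A proof too complex for any human would have the same shape, just vastly longer; validity still reduces to the kernel accepting the term.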
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
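For readers unfamiliar with the model Bubeck mentions: the q-state Potts model assigns one of q colors to each vertex of a graph, and its antiferromagnetic form penalizes edges whose endpoints share a color, which is what ties it to graph coloring. A brute-force sketch of this standard definition (background only, not the o3-mini result itself; the graph here is our own toy example):

```python
from itertools import product
from math import exp

# Brute-force partition function of the q-state Potts model on a small graph:
# Z = sum over all colorings of exp(-beta * number_of_monochromatic_edges).
# In the antiferromagnetic regime at large beta, only proper colorings
# (no edge with matching endpoint colors) contribute meaningfully.
def potts_partition(edges, n_vertices, q, beta):
    z = 0.0
    for coloring in product(range(q), repeat=n_vertices):
        mono = sum(1 for u, v in edges if coloring[u] == coloring[v])
        z += exp(-beta * mono)
    return z

# A 4-cycle: at beta = 0 every coloring counts (3^4 = 81); as beta grows,
# Z approaches the number of proper 3-colorings of C4, which is 18.
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
z_hot = potts_partition(cycle4, 4, q=3, beta=0.0)
z_cold = potts_partition(cycle4, 4, q=3, beta=50.0)
```

The open question Bubeck describes concerns rigorous analysis of such models, far beyond this toy enumeration, but the coloring connection is exactly this one.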
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
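The "patients like me" paradigm Lee describes can be sketched as a similarity search over patient records. This is a toy illustration with invented features and records; a real system would need feature normalization, de-identification, consent, and clinical validation:

```python
from math import sqrt

# Toy sketch of the "patients like me" paradigm: find the records most
# similar to the current patient and tally how those patients were treated
# and how they fared. All feature names and values here are invented.
records = [
    # (features: [age, systolic_bp, hba1c], treatment, outcome)
    ([54, 150, 8.1], "drug_a", "improved"),
    ([60, 145, 7.9], "drug_a", "improved"),
    ([58, 152, 8.3], "drug_b", "no_change"),
    ([35, 118, 5.2], "lifestyle", "improved"),
]

def similar_patients(patient, k=2):
    # Euclidean distance over raw features; a real system would normalize
    # the scales and use far richer representations.
    def dist(feats):
        return sqrt(sum((a - b) ** 2 for a, b in zip(patient, feats)))
    return sorted(records, key=lambda r: dist(r[0]))[:k]

neighbors = similar_patients([55, 148, 8.0])
treatments = [t for _, t, _ in neighbors]
```

The AI's role in Lee's vision is to do this grounding over vastly larger, messier data and then reason about what the neighbors' outcomes imply for the patient at hand.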
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    How AI is reshaping the future of healthcare and medical research
    Transcript    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”  This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.  Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?  In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here. The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.      Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  
LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. 
But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. 
But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. 
And just right there, it was shown to be possible.   LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   
Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  
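Bubeck's point about notification rate can be made concrete with a small throttling sketch. This is our own toy design, not any deployed system's; the priority threshold and time gap are invented parameters:

```python
# Toy alert throttle: suppress AI suggestions that are low priority or that
# arrive too soon after the last one, so clinicians don't learn to ignore them.
class AlertThrottle:
    def __init__(self, min_priority: float, min_gap_s: float):
        self.min_priority = min_priority  # drop suggestions below this score
        self.min_gap_s = min_gap_s        # minimum seconds between alerts
        self._last_sent = float("-inf")

    def should_send(self, priority: float, now: float) -> bool:
        if priority < self.min_priority:
            return False
        if now - self._last_sent < self.min_gap_s:
            return False
        self._last_sent = now
        return True

throttle = AlertThrottle(min_priority=0.7, min_gap_s=300.0)
decisions = [throttle.should_send(p, t) for p, t in
             [(0.9, 0.0), (0.95, 10.0), (0.5, 400.0), (0.8, 400.0)]]
# First high-priority alert goes through; a second one 10 s later is
# suppressed by the gap; a low-priority one is dropped; a high-priority
# one after the gap goes through again.
```

Tuning `min_priority` and `min_gap_s` is exactly the kind of human-factors work Bubeck argues matters as much as raw model intelligence.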
LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa … So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   
The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. 
And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. 
It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? 
What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot.
And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.
The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. 
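On Lee’s point about proofs “written in a proof-checking language like Lean”: the reason such proofs stay trustworthy even when no human can follow them is that the Lean kernel verifies every step mechanically. A minimal, hand-picked illustration (this trivial theorem is mine, not from the conversation) — the same kernel check applies unchanged to a machine-generated proof thousands of pages long:

```lean
-- A tiny Lean 4 theorem: the checker certifies commutativity of
-- natural-number addition with no human judgment involved.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```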
I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see whether it produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer.
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? 
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind.
He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? 
What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.
    How AI is reshaping the future of healthcare and medical research
Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  
BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. 
It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  
I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model.
Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  
You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. 
[LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.
It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa … so, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything.
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  LEE: Right.  GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmarks. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give a diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  
LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? 
So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. 
So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. 
So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  
GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years; people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. 
By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.  And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  
And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   
I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   
HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
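The “patients like me” paradigm Lee describes can be sketched as a simple similarity search over past patient records: find the most similar prior cases and look at how they were treated and what their outcomes were. The feature set, records, and outcomes below are invented purely for illustration; a real system would use far richer clinical data, learned similarity metrics, and privacy-preserving infrastructure.

```python
import math

# Hypothetical patient records: (age, systolic_bp, hba1c, outcome).
# All values and outcomes are made up for this sketch.
RECORDS = [
    (54, 150, 8.1, "responded to metformin"),
    (61, 160, 9.0, "needed insulin"),
    (47, 130, 6.2, "lifestyle changes were sufficient"),
]

def distance(query, record):
    """Euclidean distance over the numeric features only."""
    return math.sqrt(sum((q - r) ** 2 for q, r in zip(query, record)))

def patients_like_me(query, records, k=2):
    """Return the k records whose features are closest to the query patient."""
    return sorted(records, key=lambda rec: distance(query, rec[:3]))[:k]

# A hypothetical current patient: age 56, systolic BP 148, HbA1c 8.3.
for rec in patients_like_me((56, 148, 8.3), RECORDS):
    print(rec[3])
```

This nearest-neighbor retrieval is only the skeleton of the idea; the point is that the diagnostic question becomes “what happened to similar patients?” rather than a lookup in a textbook.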
  • As a former Xbox 360 owner I don’t understand Xbox today – Reader’s Feature

    GameCentral

    Published June 15, 2025 1:00am

    Xbox 360 is coming up to its 20th anniversary
    A reader looks back on the Xbox 360 era and is frustrated at how things have evolved since then, with ROG Xbox Ally and the move towards multiformat releases.
    I thought the Xbox Games Showcase on Sunday was pretty good. Like Sony’s State of Play, it was mostly third party games but there was some interesting stuff there and I think overall the vibe was better than from Sony. I liked the look of High On Life 2, There Are No Ghosts At The Grand, and Cronos: The New Dawn the best but there were a lot of potentially cool games – I’d include Keeper, because it looked interestingly weird, but I don’t feel Double Fine are ever very good at gameplay.
    The biggest news out of the event was the new portable with the terrible name: Asus ROG Xbox Ally. I bet you can just imagine some parent asking for that at a shop at Christmas, to buy for their kid? Not that that would ever happen because the thing’s going to be stupidly expensive.
    It seemed like a distraction, a small experiment at best, and I didn’t really pay much attention to it, especially as I already have a Steam Deck. But then today I read that Microsoft has cancelled its plans for their next gen portable and that actually this ridiculously named non-Xbox device may end up being the future of gaming for Microsoft.
    I’ve always preferred Xbox as my console of choice, probably because I was always a PC gamer before that. Although now I look back at things I have to admit that I only got the Xbox One out of brand loyalty and I wouldn’t have if I’d been thinking about it more clearly.
    By that point I was in too deep and so I bought the Xbox Series X/S out of muscle memory more than anything, wasn’t I proven to be a chump?
    What frustrates me most about Xbox at the moment is how indecisive it seems. I almost didn’t watch the Xbox Games Showcase because I knew I’d have to see Phil Spencer, or one of his goons, grinning into the camera, as if nothing is wrong. And, of course, that’s exactly what he did, ‘hinting’ about the return of Halo, as if everyone was going to be pumping the air to hear about that.

    News flash, Phil: no one cares. You’ve run that series into the ground, like all the other Xbox exclusives, to the point where they just feel old fashioned and tired. Old school fans don’t care and newer ones definitely don’t. It may sell okay at first on PlayStation 5, but only out of curiosity and as a kind of celebration that Sony has finally defeated Microsoft.
    To all intents and purposes, Xbox is now third party. The only thing that makes them not is that they still make their own console hardware but how long is that going to last? The ROG Ally is made by Asus and if Microsoft don’t make a handheld are they really going to put out a home console instead? That’s going to cost a lot of money in R&D and marketing and everything else, and I don’t know who could argue that it’s got a chance of selling more than the Xbox Series X/S.
    Phil Spencer has been talking about making a handheld for years and yet suddenly it’s not going to happen? Is there anything that is set in stone? I even heard people talking about them going back to having exclusives with the next generation, if it seemed like things were working out.
    I loved my Xbox 360, it’s still my favourite console of all time – the perfect balance between modern and retro games – but its golden era is a long time ago now, well over a decade. Xbox at the time was the new kid on the block, full of new ideas and daring to do what Sony wouldn’t or couldn’t. When was the last time Xbox did anything like that? Game Pass probably, and that hasn’t worked out at all well.

    Nothing has, ever since that disastrous Xbox One reveal, and I just don’t understand how a company with basically infinite resources, and which already owns half the games industry, can be such a hopeless mess. I’m just sticking with PC from now on, and in the future I’m going to pretend the Xbox 360 was my one and only console.
    By reader Cramersauce

    Xbox One – not a good follow-up to the Xbox 360
    The reader’s features do not necessarily represent the views of GameCentral or Metro.
    You can submit your own 500 to 600-word reader feature at any time, which if used will be published in the next appropriate weekend slot. Just contact us at gamecentral@metro.co.uk or use our Submit Stuff page and you won’t need to send an email.

    #former #xbox #owner #dont #understand
    As a former Xbox 360 owner I don’t understand Xbox today – Reader’s Feature
    As a former Xbox 360 owner I don’t understand Xbox today – Reader’s Feature GameCentral Published June 15, 2025 1:00am Xbox 360 is coming up to its 20th anniversaryA reader looks back on the Xbox 360 era and is frustrated at how things have evolved since then, with ROG Xbox Ally and the move towards multiformat releases. I though the Xbox Games Showcase on Sunday was pretty good. Like Sony’s State of Play, it was mostly third party games but there was some interesting stuff there and I think overall the vibe was better than from Sony. I liked the look of High On Life 2, There Are No Ghosts At The Grand, and Cronos: The New Dawn the best but there was a lot of potentially cool games – I’d include Keeper, because it looked interestingly weird, but I don’t feel Double Fine are ever very good at gameplay. The biggest news out of the event was the new portable with the terrible name: Asus ROG Xbox Ally. I bet you can just imagine some parent asking that for that at shop at Christmas, to buy their kid? Not that that would ever happen because the thing’s going to be stupidly expensive. It seemed like a distraction, a small experiment at best, and I didn’t really pay much attention to it, especially as I already have a Steam Deck. But then today I read that Microsoft has cancelled its plans for their next gen portable and that actually this ridiculously named non-Xbox device may end up being the future of gaming for Microsoft. I’ve always preferred Xbox as my console as choice, probably because I was always a PC gamer before that. Although now I look back at things I have to admit that I only got the Xbox One out of brand loyalty and I wouldn’t have if I’d been thinking about it more clearly. By that point I was in too deep and so I bought the Xbox Series X/S out of muscle memory more than anything, wasn’t I proven to be a chump? What frustrates me most about Xbox at the moment is how indecisive it seems. 
I almost didn’t watch the Xbox Games Showcase because I knew I’d have to see Phil Spencer, or one of his goons, grinning into the camera, as if nothing is wrong. And, of course, that’s exactly what he did, ‘hinting’ about the return of Halo, as if everyone was going to be pumping the air to hear about that. Expert, exclusive gaming analysis Sign up to the GameCentral newsletter for a unique take on the week in gaming, alongside the latest reviews and more. Delivered to your inbox every Saturday morning. News flash, Phil: no one cares. You’ve run that series into the ground, like all the other Xbox exclusives, to the point where they just feel old fashioned and tired. Old school fans don’t care and newer ones definitely don’t. It may sell okay at first on PlayStation 5, but only out of curiosity and as a kind of celebration that Sony has finally defeated Microsoft. To all extents and purposes, Xbox is now third party. The only thing that makes them not is that they still make their own console hardware but how long is that going to last? The ROG Ally is made by Asus and if Microsoft don’t make a handheld are they really going to put out a home console instead? That’s going to cost a lot of money in R&D and marketing and everything else, and I don’t know who could argue that it’s got a chance of selling more than the Xbox Series X/S. Phil Spencer has been talking about making a handheld for years and yet suddenly it’s not going to happen? Is there anything that is set in stone? I even heard people talking about them going back to having exclusives with the next generation, if it seemed like things were working out. I loved my Xbox 360, it’s still my favourite console of all time – the perfect balance between modern and retro games – but its golden era is a long time ago now, well over a decade. Xbox at the time was the new kid on the block, full of new ideas and daring to what Sony wouldn’t or couldn’t. When was the last time Xbox did anything like that? 
Game Pass probably, and that hasn’t worked out at all well. More Trending Nothing has, ever since that disastrous Xbox One reveal, and I just don’t understand how a company with basically infinite resources, and which already owns half the games industry, can be such a hopeless mess. I’m just sticking with PC from now and in the future, I’m going to pretend the Xbox 360 was my one and only console. By reader Cramersauce Xbox One – not a good follow-up to the Xbox 360The reader’s features do not necessarily represent the views of GameCentral or Metro. You can submit your own 500 to 600-word reader feature at any time, which if used will be published in the next appropriate weekend slot. Just contact us at gamecentral@metro.co.uk or use our Submit Stuff page and you won’t need to send an email. GameCentral Sign up for exclusive analysis, latest releases, and bonus community content. This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Your information will be used in line with our Privacy Policy #former #xbox #owner #dont #understand
    METRO.CO.UK
    As a former Xbox 360 owner I don’t understand Xbox today – Reader’s Feature
    As a former Xbox 360 owner I don’t understand Xbox today – Reader’s Feature GameCentral Published June 15, 2025 1:00am Xbox 360 is coming up to its 20th anniversary (Microsoft) A reader looks back on the Xbox 360 era and is frustrated at how things have evolved since then, with ROG Xbox Ally and the move towards multiformat releases. I though the Xbox Games Showcase on Sunday was pretty good. Like Sony’s State of Play, it was mostly third party games but there was some interesting stuff there and I think overall the vibe was better than from Sony. I liked the look of High On Life 2, There Are No Ghosts At The Grand, and Cronos: The New Dawn the best but there was a lot of potentially cool games – I’d include Keeper, because it looked interestingly weird, but I don’t feel Double Fine are ever very good at gameplay. The biggest news out of the event was the new portable with the terrible name: Asus ROG Xbox Ally. I bet you can just imagine some parent asking that for that at shop at Christmas, to buy their kid? Not that that would ever happen because the thing’s going to be stupidly expensive. It seemed like a distraction, a small experiment at best, and I didn’t really pay much attention to it, especially as I already have a Steam Deck. But then today I read that Microsoft has cancelled its plans for their next gen portable and that actually this ridiculously named non-Xbox device may end up being the future of gaming for Microsoft. I’ve always preferred Xbox as my console as choice, probably because I was always a PC gamer before that. Although now I look back at things I have to admit that I only got the Xbox One out of brand loyalty and I wouldn’t have if I’d been thinking about it more clearly. By that point I was in too deep and so I bought the Xbox Series X/S out of muscle memory more than anything, wasn’t I proven to be a chump? What frustrates me most about Xbox at the moment is how indecisive it seems. 
    I almost didn’t watch the Xbox Games Showcase because I knew I’d have to see Phil Spencer, or one of his goons, grinning into the camera as if nothing is wrong. And, of course, that’s exactly what he did, ‘hinting’ at the return of Halo, as if everyone was going to be pumping the air to hear about that.

    Expert, exclusive gaming analysis

    Sign up to the GameCentral newsletter for a unique take on the week in gaming, alongside the latest reviews and more. Delivered to your inbox every Saturday morning.

    News flash, Phil: no one cares. You’ve run that series into the ground, like all the other Xbox exclusives, to the point where they just feel old fashioned and tired. Old school fans don’t care and newer ones definitely don’t. It may sell okay at first on PlayStation 5, but only out of curiosity and as a kind of celebration that Sony has finally defeated Microsoft.
    To all intents and purposes, Xbox is now third party. The only thing that makes them not is that they still make their own console hardware, but how long is that going to last? The ROG Ally is made by Asus, and if Microsoft doesn’t make a handheld are they really going to put out a home console instead? That’s going to cost a lot of money in R&D and marketing and everything else, and I don’t know who could argue that it has a chance of selling more than the Xbox Series X/S.
    Phil Spencer has been talking about making a handheld for years and yet suddenly it’s not going to happen? Is there anything that is set in stone? I’ve even heard people talking about them going back to having exclusives with the next generation, if it seemed like things were working out.
    I loved my Xbox 360 – it’s still my favourite console of all time, the perfect balance between modern and retro games – but its golden era is a long time ago now, well over a decade. Xbox at the time was the new kid on the block, full of new ideas and daring to do what Sony wouldn’t or couldn’t. When was the last time Xbox did anything like that?
    Game Pass probably, and that hasn’t worked out at all well. Nothing has, ever since that disastrous Xbox One reveal, and I just don’t understand how a company with basically infinite resources, and which already owns half the games industry, can be such a hopeless mess. I’m sticking with PC from now on and, in the future, I’m going to pretend the Xbox 360 was my one and only console.
    By reader Cramersauce

    Xbox One – not a good follow-up to the Xbox 360 (Microsoft)
    The reader’s features do not necessarily represent the views of GameCentral or Metro.
    You can submit your own 500 to 600-word reader feature at any time, which if used will be published in the next appropriate weekend slot. Just contact us at gamecentral@metro.co.uk or use our Submit Stuff page and you won’t need to send an email.
  • I’m going to say it: Mario Kart World is not as good as it should be – Reader’s Feature

    I’m going to say it: Mario Kart World is not as good as it should be – Reader’s Feature

    GameCentral

    Published June 15, 2025 6:00am

    Mario Kart World – is it a let-down? (Nintendo)
    A reader is unimpressed by Mario Kart World on the Nintendo Switch 2 and argues that the controversial free roam mode is not its only issue.
    As a day one Nintendo Switch 2 owner I have to admit I’m a little disappointed. Not with the console itself, which I think is pretty much perfect for the price and what it has to do, but with the only game worth getting at launch: Mario Kart World.
    Now, I don’t think it’s terrible, but I do think that not only is it not as good as Mario Kart 8 but that it’s kind of a flawed experiment and one of the weakest entries in the whole series. But I’ll talk about the positives first, just to show it’s not all bad.
    Knockout Tour is great, I think everyone would agree. A bit boring in single-player, but fantastic online and the game’s best feature. I also like all the weird extra characters, although how you unlock them and the costumes is very random and unsatisfying. The open world is also very nicely designed in its own right, and very large, but… that’s kind of all I’ve got in terms of praise.
    First, I’ll get the obvious thing out of the way: the open world is completely wasted. None of the challenges in it are interesting, if you can even find them, and a lot of them are overly hard and frustrating. There’s no story or dialogue or anything. You just drive around at random in free roam and hope you come across something interesting, which you almost certainly won’t.
    If any game was born to have fetch quests in it, it was this and yet there’s nothing like that. It all feels like it’s waiting for the actual game to be dropped onto the world but there’s nothing there. Maybe it will come in DLC, but even if it’s free why wasn’t it there from the start? Why wouldn’t you go all out for basically your only launch game? It’s baffling.
    But for me that’s not the real problem because, rightly or wrongly, free roam is really just a side show. My problem is that the actual racing in the two main modes is very dull. It may not seem that way when you’ve got a dozen people firing shells at you at once, but that gets old very quickly, and it doesn’t actually happen that much, especially in single-player.


    Most of the time you’re just driving along (even 150cc isn’t that fast) and taking slow bend after slow bend in what aren’t even really courses at all. Knockout Tour is worst for this, because you’re essentially driving point-to-point and it really does feel like you’re just road racing, with nothing in terms of exciting or unexpected track design.
    Grand Prix is barely any better, with very few lapped races and too many wide roads that are too easy to take. I went back to play Mario Kart 8 and it’s filled with tightly designed courses and weird and physically impossible track designs. It seems weird to say, but Mario Kart World is basically too realistic, or rather too mundane in its design. Everything about it feels flabby and under-designed.
    Sure, occasionally you fly vertically up into the air or down the side of a volcano, but when you get down to the actual racing it’s so plain and boring. The tracks aren’t designed for time trials and racing skill, they’re designed for power-ups and 24 player online races, and that has ruined everything.


    I’m sure other people will enjoy the game, but as someone who has enjoyed every previous Mario Kart it’s not for me. Which means I’m now left with a neat new console with nothing to play on it, except for old Switch 1 games. And that will definitely include Mario Kart 8.
    By reader Lambent

    What is the future of Mario Kart World? (Nintendo)
    The reader’s features do not necessarily represent the views of GameCentral or Metro.
    You can submit your own 500 to 600-word reader feature at any time, which if used will be published in the next appropriate weekend slot. Just contact us at gamecentral@metro.co.uk or use our Submit Stuff page and you won’t need to send an email.
