• How to Analyze & Compare Competitor Website Traffic in 2025

    competitor analysis, website traffic, traffic comparison, SEO strategies, digital marketing, engagement metrics, user demographics, website analytics, 2025 trends

    In the cutthroat world of digital marketing, knowing your competitors’ website traffic is akin to knowing the secret ingredient of a rival chef’s famous dish. In 2025, analyzing and comparing competitor website traffic has evolved into an art form. Forget the days of simply glancing at their homepages; today, we dissect, delve, and de...
  • The 20 Worst Movies of the Last 20 Years

    SCREENCRUSH.COM

    There is no good without bad. It’s a cliché, but it’s true. How can you fully appreciate an exceptional work of art without comparing it to one that didn’t work? A truly awful movie puts a truly great masterpiece into perspective.

    So consider this piece a study in perspectives. No one who made any of the 20 movies below, our picks for the 20 worst movies of the last 20 years (2005-2024), set out to produce a bad movie. But it happened anyway, despite all their hard work and good intentions. Writing is hard. Casting is hard. Directing is hard. Movies are hard.

    (Okay, strike that. At least one filmmaker on the list reportedly exploited a tax loophole that meant investors only had to pay taxes on investments in films that turned a profit, leaving a financial incentive for a movie to flop. So maybe someone occasionally does set out to make a bad movie. Or at least, the movie’s quality is of far lesser concern than, say, sales to foreign distributors. But it’s rare.)

    If you’re thinking about using this list to help program your friends’ next Bad Movie Night, just keep in mind: Some of the films below are not so-bad-they’re-good. They’re just plain awful. (The tax loophole guy’s film, for example, that’s a real tough sit.) Proceed with caution and remember: There is no good without bad ... but sometimes it’s better to just watch a good movie instead of forcing the comparison.
  • iPad Air vs reMarkable Paper Pro: Which tablet is best for note taking? [Updated]

    Over the past few months, I’ve had the pleasure of testing out the reMarkable Paper Pro. You can read my full review here, but in short, it gets everything right about the note taking experience.
    For an e-ink tablet, it is quite pricey. However, there are certainly some fantastic parts of the experience that make it worth comparing to an iPad Air, depending on what you’re looking for in a note-taking device for school, work, or anything else.

    Updated June 15th to reflect reMarkable’s new post-tariff pricing.
    Overview
    Since the reMarkable Paper Pro comes in at $679 with the reMarkable Marker Plus included, it likely makes most sense to compare this against Apple’s iPad Air 11-inch. That comes in at $599 without an Apple Pencil, and adding in the Apple Pencil Pro will run you an additional $129. The equivalent iPad setup will run you $49 more than the reMarkable Paper Pro.
    Given that iPad Airs regularly go on sale, it’d be fair to say they’re roughly on the same playing field. So, $679 for a reMarkable Paper Pro setup versus $728 for a comparable iPad Air setup. Which is better for you?
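    The cost comparison is simple addition; a quick sketch (using the prices quoted at the time of the June update, with illustrative variable names) works out the gap between the two setups:

    ```python
    # Prices in USD as quoted in the June update; these shift with sales.
    remarkable_bundle = 679   # reMarkable Paper Pro with Marker Plus included
    ipad_air_11 = 599         # iPad Air 11-inch, tablet only
    apple_pencil_pro = 129    # needed for a comparable note-taking setup

    ipad_setup = ipad_air_11 + apple_pencil_pro
    gap = ipad_setup - remarkable_bundle

    print(f"iPad Air setup:    ${ipad_setup}")         # $728
    print(f"reMarkable bundle: ${remarkable_bundle}")  # $679
    print(f"iPad setup costs ${gap} more")             # $49
    ```

    A sale price on either side flips the comparison easily, which is why the two land on roughly the same playing field in practice.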
    Obviously, the iPad Air has one key advantage: it runs iPadOS, has millions of apps available, and can browse the web, play games, stream TV shows and movies, and much more. To some, that might end the comparison and make the iPad a clear winner, but I disagree.
    Yes, if you want your tablet to do all of those things for you, the iPad Air is a no brainer. At the end of the day, the iPad Air is a general purpose tablet that’ll do a lot more for you.
    However, if you also have a laptop to accompany your tablet, I’d argue that the iPad Air may fall into a category of slight redundancy. Most things you’d want to do on the iPad can be done on a laptop, excluding any touchscreen- or stylus-reliant features.
    iPads are great, and if that’s what you want, you should pick one. However, I have an alternative argument to offer…
    The reMarkable Paper Pro does one thing really well: note taking. At first thought, you might think: why would I pay so much for a device that only does one thing?
    Well, that’s because it does that one thing really well. There’s also a second side to this argument: focus.
    It’s much easier to focus on what you’re doing when the device isn’t capable of anything else. If you’re taking notes while studying, you could easily see a notification or have the temptation to check notification center. Or, if you’re reading an e-book, you could easily choose to swipe up and get into another app.
    The best thing about the reMarkable Paper Pro is that you can’t easily get lost in the world of modern technology, while still having important technological features like cloud backup of your notes. Plus, you don’t have to worry about carrying around physical paper.
    One last thing – the reMarkable Paper Pro also has rubber feet on the back, so if you place it down flat on a table caseless, you don’t have to worry about scratching it up.
    Spec comparison
    Here’s a quick rundown of the key specs of the two devices. The reMarkable Paper Pro’s strengths definitely lie in battery, form factor, and stylus. The iPad has some rather neat features with the Apple Pencil Pro, and also wins the display category. Both devices offer keyboards for typed notes, though only the iPad offers a trackpad.
    Display
    – iPad Air: 10.9-inch LCD display, glossy glass, 2360 × 1640 at 264 ppi
    – reMarkable Paper Pro: 11.8-inch color e-ink display, paper-feel textured glass, 2160 × 1620 at 229 ppi

    Hardware
    – iPad Air: 6.1mm thin, anodized aluminum coating, weighs 461g without the Pencil Pro
    – reMarkable Paper Pro: 5.1mm thin, textured aluminum edges, weighs 360g with the Marker attached

    Stylus
    – Apple Pencil Pro: magnetically charges from the device, tilt/pressure sensitivity, low latency, matte plastic build, squeeze and double-tap gestures
    – Marker Plus: magnetically charges from the device, tilt/pressure sensitivity, ultra-low latency (12ms), premium textured aluminum build, built-in eraser on the bottom

    Battery life
    – iPad Air: up to 10 hours of web browsing, recharges to 100% in 2-3 hrs
    – reMarkable Paper Pro: up to 14 days of typical usage, fast charges to 90% in 90 mins

    Price
    – iPad Air: $599 ($529 on sale), plus $129 ($99 on sale) for the Pencil Pro
    – reMarkable Paper Pro: $679 bundled with the Marker Plus
    Wrap up
    All in all, I’m not going to try to convince anyone that wanted to buy an iPad that they should buy a reMarkable Paper Pro. You can’t beat the fact that the iPad Air will do a lot more, for roughly the same cost.
    But, if you’re not buying this to be a primary computing device, I’d argue that the reMarkable Paper Pro is a worthy alternative, especially if you really just want something you can zone in on. The reMarkable Paper Pro feels a lot nicer to write on, has substantially longer battery life, and really masters a minimalist form of digital note taking.
    What do you think of these two tablets? Let us know in the comments.

    9TO5MAC.COM
  • CD Projekt RED: TW4 has console-first development with a 60fps target; 60fps on Series S will be "extremely challenging"

    DriftingSpirit
    Member

    Oct 25, 2017

    18,563

    They note how they usually start with PC and scale down, but they will be doing it the other way around this time to avoid issues with the console versions.

    4:15 for console focus and 60fps
    38:50 for the Series S comment 

    bsigg
    Member

    Oct 25, 2017

    25,153

    Inside The Witcher 4 Unreal Engine 5 Tech Demo: CD Projekt RED + Epic Deep Dive Interview
    www.resetera.com

    Skot
    Member

    Oct 30, 2017

    645

    720p on Series S incoming
     

    Bulby
    Prophet of Truth
    Member

    Oct 29, 2017

    6,006

    Berlin

    I think any Series S user will be happy with a beautiful 900p 30fps
     

    Chronos
    Member

    Oct 27, 2017

    1,249

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.
     

    HellofaMouse
    Member

    Oct 27, 2017

    8,551

    i wonder if this'll come out before the gen is over?

    good chance it'll be a 2077 situation, cross-gen release with a broken ps6 version 

    logash
    Member

    Oct 27, 2017

    6,526

    This makes sense since they want to have good performance on lower end machines and they mentioned that it was easier to scale up than to scale down. They also mentioned their legacy on PC and how they plan on scaling it up high like they usually do on PC.
     

    KRT
    Member

    Aug 7, 2020

    247

    Series S was a mistake
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    The game has ray-traced GI and reflections; it will probably be 30 fps at 600p-720p on Xbox Series S.
     

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Bulby said:

    I think any Series S user will be happy with a beautiful 900p 30fps


     

    Yuuber
    Member

    Oct 28, 2017

    4,540

    KRT said:

    Series S was a mistake


    Can we stop with these stupid takes? For all we know it sold as much as the Series X, helped several games get better optimization on the bigger consoles, and it will definitely help with optimizing newer games for the Nintendo Switch 2. 

    MANTRA
    Member

    Feb 21, 2024

    1,198

    No one who cares about 60fps should be buying a Series S, just make it 30fps.
     

    Roytheone
    Member

    Oct 25, 2017

    6,185

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    They can just go for 30 fps instead on the Series S. No need for a special deal for that, that's allowed. 

    Matterhorn
    Member

    Feb 6, 2019

    254

    United States

    Hoping for a very nice looking 30fps Switch 2 version.
     

    Universal Acclaim
    Member

    Oct 5, 2024

    2,617

    Maybe off topic, but is a 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc., which is why the game can't be scaled down to 720-900p/60fps?
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    Matterhorn said:

    Hoping for a very nice looking 30fps Switch 2 version.


    It will be a full port a few years later, like The Witcher 3. They don't use software Lumen here, and I doubt the Switch 2's ray tracing capability is high enough to use the same pipeline to produce the Switch 2 version.

    EDIT: And they probably need to redo all the assets.

    Fortnite doesn't use Nanite and Lumen on Switch 2. 

    Last edited: Yesterday at 4:18 PM

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Universal Acclaim said:

    Maybe off topic, but is a 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc., which is why the graphics can't be scaled down to 720p/60fps?


    Graphics are the part of the game that can be scaled, it is CPU load that is the more difficult part, although devs have actually made cuts in the latter to increase performance mode fps viability. Even with this focus on 60fps performance modes, they are always going to have room to make a higher fidelity 30fps mode. Specifically with UE5 though, performance has been such a disaster all around and Epic seems to be taking it seriously now.
     

    Greywaren
    Member

    Jul 16, 2019

    13,530

    Spain

    60 fps target is fantastic, I wish it was the norm.
     

    julia crawford
    Took the red AND the blue pills
    Member

    Oct 27, 2017

    40,709

    i am very ok with lower fps on the series s, it is far more palatable than severe resolution drops with upscaling artifacts.
     

    Spoit
    Member

    Oct 28, 2017

    5,599

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back
     

    PLASTICA-MAN
    Member

    Oct 26, 2017

    29,563

    chris 1515 said:

    The game has ray-traced GI and reflections; it will probably be 30 fps at 600p-720p on Xbox Series S.


    There is kind of a misconception about how Lumen and hybrid RT are handled in UE5 titles. AO is also part of the ray-traced pipeline through hardware Lumen.
    Only shadows are handled separately from the RT system, using VSM, which in the final look behaves quite like RT shadows in shape, same as how FF16's shadows look like RT ones while they aren't traced.
    UE5 can still trace shadows if they want to push things even further. 

    overthewaves
    Member

    Sep 30, 2020

    1,203

    What about the PS5 handheld?
     

    nullpotential
    Member

    Jun 24, 2024

    87

    KRT said:

    Series S was a mistake


    Consoles were a mistake. 

    GPU
    Member

    Oct 10, 2024

    1,075

    I really dont think Series S/X will be much of a factor by the time this game comes out.
     

    Lashley
    <<Tag Here>>
    Member

    Oct 25, 2017

    65,679

    Just make series s 480p 30fps
     

    pappacone
    Member

    Jan 10, 2020

    4,076

    Greywaren said:

    60 fps target is fantastic, I wish it was the norm.


    It pretty much is
     

    Super
    Studied the Buster Sword
    Member

    Jan 29, 2022

    13,601

    I hope they can pull 60 FPS off in the full game.
     

    Theorry
    Member

    Oct 27, 2017

    69,045

    "target"

    Uh huh. We know how that is gonna go. 

    Jakartalado
    Member

    Oct 27, 2017

    2,818

    São Paulo, Brazil

    Skot said:

    720p on Series S incoming


    If the PS5 is internally at 720p up to 900p, I seriously doubt that. 

    Revoltoftheunique
    Member

    Jan 23, 2022

    2,312

    It will be unstable 60fps with lots of stuttering.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    KRT said:

    Series S was a mistake


    With that same attitude, in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. That's why it's stupid.
     

    Horns
    Member

    Dec 7, 2018

    3,423

    I hope Microsoft drops the requirement for Series S by the time this comes out.
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    PLASTICA-MAN said:

    There is kind of a misconception about how Lumen and hybrid RT are handled in UE5 titles. AO is also part of the ray-traced pipeline through hardware Lumen.

    Only shadows are handled separately from the RT system, using VSM, which in the final look behaves quite like RT shadows in shape, same as how FF16's shadows look like RT ones while they aren't traced.
    UE5 can still trace shadows if they want to push things even further.

    Yes, indirect shadows are handled by hardware Lumen. But in the end it doesn't change my comment. I think the game will be 600-720p at 30 fps on Series S. 

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Spoit said:

    And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back


    Has it been confirmed that Sony is going to have release requirements like the XS?
     

    Commander Shepherd
    Member

    Jan 27, 2023

    173

    Anyone remember when no load screens was talked about for Witcher 3?
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    No, this is probably different from what most games do: here the main focus is the 60 fps mode, and afterwards they can create balanced (40 fps) and 30 fps modes.

    This is not the other way around. 

    stanman
    Member

    Feb 13, 2025

    235

    defaltoption said:

    With that same attitude, in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. That's why it's stupid.


    And your mistake is comparing a PC graphics card to a console. 

    PLASTICA-MAN
    Member

    Oct 26, 2017

    29,563

    chris 1515 said:

    Yes, indirect shadows are handled by hardware Lumen. But in the end it doesn't change my comment: I think the game will be 600-720p at 30 fps on Series S.


    Yes. I am sure Series S will have the HW solution, but probably at 30 FPS. It would be a miracle if they achieve 60 FPS. 

    ArchedThunder
    Uncle Beerus
    Member

    Oct 25, 2017

    21,278

    chris 1515 said:

    It will be a full port a few years later, like The Witcher 3; they don't use software Lumen here. I doubt the Switch 2's raytracing capability is high enough to use the same pipeline to produce the Switch 2 version.

    EDIT: And they probably need to redo all the assets.

    https://www.reddit.com/r/FortNiteBR/comments/1l4a1o4/fortnite_on_the_switch_2_looks_great_these_low/

    Fortnite doesn't use Nanite and Lumen on Switch 2.

    Fortnite not using Lumen or Nanite at launch doesn't mean they can't run well on Switch 2. It's a launch port and they prioritized clean IQ and 60fps. I wouldn't be surprised to see them added later. Also it's not like the ray tracing in a Witcher 3 port has to match PS5, there's a lot of scaling back that can be done with ray tracing without ripping out the kitchen sink. Software lumen is also likely to be an option on P.
     

    jroc74
    Member

    Oct 27, 2017

    34,465

    Interesting times ahead....

    bitcloudrzr said:

    Has it been confirmed that Sony is going to have release requirements like the XS?


    You know good n well everything about this rumor has been confirmed.

    /S 

    Derbel McDillet
    ▲ Legend ▲
    Member

    Nov 23, 2022

    25,250

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    How does this sound like a Cyberpunk issue? They didn't say they can't get it to work on the S.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    stanman said:

    And your mistake is comparing a PC graphics card to a console.


     

    reksveks
    Member

    May 17, 2022

    7,628

    Horns said:

    I hope Microsoft drops the requirement for Series S by the time this comes out.


    Why? Devs can make it 30 fps on Series S and 60 fps on Series X if needed.

    If they aren't or don't have to drop it for GTA VI, they probably ain't dropping it for TW4. 

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    defaltoption said:

    With that same attitude, in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. That's why it's stupid.


    No, the consoles won't hold back your 5090, because the game is created with hardware Lumen, RT reflections, virtual shadow maps, and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version?

    If the game was made with software Lumen as the base, it would have held back your 5090...

    Your PC will have much better IQ, framerate and better raytracing with Megalight (direct raytraced shadows with tons of light sources) and better raytracing settings in general. 

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    jroc74 said:

    Interesting times ahead....

    You know good n well everything about this rumor has been confirmed.

    /S

    Sony is like the opposite of a platform holder "forcing" adoption, for better or worse.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    chris 1515 said:

    No, the consoles won't hold back your 5090, because the game is created with hardware Lumen, RT reflections, virtual shadow maps, and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version?

    If the game was made with software Lumen as the base, it would have held back your 5090...

    Your PC will have much better IQ, framerate and better raytracing with Megalight (direct raytraced shadows) and better raytracing settings in general.

    Exactly, the series s is not a "mistake" or holding any version of the game on console or even PC back, that's what I'm saying to the person I replied to, its stupid to say that.
     

    cursed beef
    Member

    Jan 3, 2021

    998

    Have to imagine MS will lift the Series S parity clause when the next consoles launch. Which will be before/around the time W4 hits, right?
     

    Alvis
    Saw the truth behind the copied door
    Member

    Oct 25, 2017

    12,270

    EU

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    ? They said that 60 FPS on Series S is challenging, not the act of releasing the game there at all. The game can simply run at 30 FPS on Series S if they can't pull off 60 FPS. Or have a 40 FPS mode in lieu of 60 FPS.

    The CPU and storage speed differences between last gen and current gen were gigantic. This isn't even remotely close to a comparable situation. 
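    As a quick sketch of the frame-time arithmetic behind the 30/40/60 fps mode choices being discussed (the specific numbers here are just the standard budgets, not anything CDPR has stated): a 40 fps mode only paces evenly inside a 120 Hz output, which is why it is usually offered as a high-refresh-display option.

    ```python
    # Frame-time budgets for the modes discussed: 60, 40, and 30 fps.
    # A mode paces evenly when the display refresh rate is an exact
    # multiple of the frame rate (e.g. 120 / 40 = 3 refreshes per frame).
    def frame_budget_ms(fps: int) -> float:
        """Milliseconds of CPU/GPU time available per frame at a given fps."""
        return 1000.0 / fps

    for fps in (60, 40, 30):
        even_on_120hz = 120 % fps == 0
        even_on_60hz = 60 % fps == 0
        print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms/frame, "
              f"even at 120 Hz: {even_on_120hz}, even at 60 Hz: {even_on_60hz}")
    ```

    Note that 40 fps divides 120 but not 60, so on a 60 Hz panel a 40 fps mode would judder; 30 and 60 fps pace evenly on both.
    
    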

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    Misquoted post
     

    jroc74
    Member

    Oct 27, 2017

    34,465

    defaltoption said:

    With that same attitude, in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. That's why it's stupid.


    Ah yes, clearly 5090 cards are the vast majority of the minimum requirements for PC games.

    How can anyone say this with a straight face anymore when there are now PC games running on a Steam Deck?

    At least ppl saying that about the Series S are comparing it to other consoles.

    That said, it is interesting they are focusing on consoles first, then PC. 
    #projekt #red #tw4 #has #console
    CD Projekt RED: TW4 has console first development with a 60fps target; 60fps on Series S will be "extremely challenging"
    DriftingSpirit Member Oct 25, 2017 18,563 They note how they usually start with PC and scale down, but they will be doing it the other way around this time to avoid issues with the console versions. 4:15 for console focus and 60fps 38:50 for the Series S comment  bsigg Member Oct 25, 2017 25,153Inside The Witcher 4 Unreal Engine 5 Tech Demo: CD Projekt RED + Epic Deep Dive Interview www.resetera.com   Skot Member Oct 30, 2017 645 720p on Series S incoming   Bulby Prophet of Truth Member Oct 29, 2017 6,006 Berlin I think think any series s user will be happy with a beautiful 900p 30fps   Chronos Member Oct 27, 2017 1,249 This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.   HellofaMouse Member Oct 27, 2017 8,551 i wonder if this'll come out before the gen is over? good chance itll be a 2077 situation, cross-gen release with a broken ps6 version  logash Member Oct 27, 2017 6,526 This makes sense since they want to have good performance on lower end machines and they mentioned that it was easier to scale up than to scale down. They also mentioned their legacy on PC and how they plan on scaling it up high like they usually do on PC.   KRT Member Aug 7, 2020 247 Series S was a mistake   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain The game have raytracing GI and reflection it will probably be 30 fps 600p-720p on Xbox Series S.   bitcloudrzr Member May 31, 2018 21,044 Bulby said: I think think any series s user will be happy with a beautiful 900p 30fps Click to expand... Click to shrink...   Yuuber Member Oct 28, 2017 4,540 KRT said: Series S was a mistake Click to expand... Click to shrink... Can we stop with these stupid takes? For all we know it sold as much as Series X, helped several games have better optimization on bigger consoles and it will definitely help optimizing newer games to the Nintendo Switch 2.  
MANTRA Member Feb 21, 2024 1,198 No one who cares about 60fps should be buying a Series S, just make it 30fps.   Roytheone Member Oct 25, 2017 6,185 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... They can just go for 30 fps instead on the Series S. No need for a special deal for that, that's allowed.  Matterhorn Member Feb 6, 2019 254 United States Hoping for a very nice looking 30fps Switch 2 version.   Universal Acclaim Member Oct 5, 2024 2,617 Maybe off topic, but is 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc. whch is why the game can't be scaled down to 720-900p/60fps?   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain Matterhorn said: Hoping for a very nice looking 30fps Switch 2 version. Click to expand... Click to shrink... It will be a full port a few years after like The Witcher 3., they don't use software lumen here. I doubt the Switch 2 Raytracing capaclity is high enough to use the same pipeline to produce the Switch 2 version. EDIT: And they probably need to redo all the assets. / Fortnite doesn't use Nanite and Lumen on Switch 2.  Last edited: Yesterday at 4:18 PM bitcloudrzr Member May 31, 2018 21,044 Universal Acclaim said: Maybe off topic, but is 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc. whch is why the graphics can't be scaled down to 720p/60fps? Click to expand... Click to shrink... Graphics are the part of the game that can be scaled, it is CPU load that is the more difficult part, although devs have actually made cuts in the latter to increase performance mode fps viability. 
Even with this focus on 60fps performance modes, they are always going to have room to make a higher fidelity 30fps mode. Specifically with UE5 though, performance has been such a disaster all around and Epic seems to be taking it seriously now.   Greywaren Member Jul 16, 2019 13,530 Spain 60 fps target is fantastic, I wish it was the norm.   julia crawford Took the red AND the blue pills Member Oct 27, 2017 40,709 i am very ok with lower fps on the series s, it is far more palatable than severe resolution drops with upscaling artifacts.   Spoit Member Oct 28, 2017 5,599 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back   PLASTICA-MAN Member Oct 26, 2017 29,563 chris 1515 said: The game have raytracing GI and reflection it will probably be 30 fps 600p-720p on Xbox Series S. Click to expand... Click to shrink... There is kinda a misconception of how Lumen and the hybrid RT is handled in UE5 titles. AO is also part of the ray traced pipeline through the HW Lumen too. Just shadows are handled separately from the RT system by using VSM which in final look behvae quite like RT shadows in shape, same how FF16 handled the shadows looking like RT ones while it isn't traced. UE5 can still trace shadows if they want to push things even further.  overthewaves Member Sep 30, 2020 1,203 What about the PS5 handheld?   nullpotential Member Jun 24, 2024 87 KRT said: Series S was a mistake Click to expand... Click to shrink... Consoles were a mistake.  GPU Member Oct 10, 2024 1,075 I really dont think Series S/X will be much of a factor by the time this game comes out.   
Lashley <<Tag Here>> Member Oct 25, 2017 65,679 Just make series s 480p 30fps   pappacone Member Jan 10, 2020 4,076 Greywaren said: 60 fps target is fantastic, I wish it was the norm. Click to expand... Click to shrink... It pretty much is   Super Studied the Buster Sword Member Jan 29, 2022 13,601 I hope they can pull 60 FPS off in the full game.   Theorry Member Oct 27, 2017 69,045 "target" Uh huh. We know how that is gonna go.  Jakartalado Member Oct 27, 2017 2,818 São Paulo, Brazil Skot said: 720p on Series S incoming Click to expand... Click to shrink... If the PS5 is internally at 720p up to 900p, I seriously doubt that.  Revoltoftheunique Member Jan 23, 2022 2,312 It will be unstable 60fps with lots of stuttering.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin KRT said: Series S was a mistake Click to expand... Click to shrink... With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.   Horns Member Dec 7, 2018 3,423 I hope Microsoft drops the requirement for Series S by the time this comes out.   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain PLASTICA-MAN said: There is kinda a misconception of how Lumen and the hybrid RT is handled in UE5 titles. AO is also part of the ray traced pipeline through the HW Lumen too. Just shadows are handled separately from the RT system by using VSM which in final look behvae quite like RT shadows in shape, same how FF16 handled the shadows looking like RT ones while it isn't traced. UE5 can still trace shadows if they want to push things even further. Click to expand... Click to shrink... Yes indirect shadows are handled by hardware lumen. But at the end ot doesn¡t change my comment. i think the game will be 600´720p at 30 fps on Series S.  
bitcloudrzr Member May 31, 2018 21,044 Spoit said: And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back Click to expand... Click to shrink... Has it been confirmed that Sony is going to have release requirements like the XS?   Commander Shepherd Member Jan 27, 2023 173 Anyone remember when no load screens was talked about for Witcher 3?   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain No this is probably different than most game are doing it here the main focus is the 60 fps mode and after they can create a balancedand 30 fps mode. This is not the other way around.  stanman Member Feb 13, 2025 235 defaltoption said: With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. Click to expand... Click to shrink... And your mistake is comparing a PC graphics card to a console.  PLASTICA-MAN Member Oct 26, 2017 29,563 chris 1515 said: Yes indirect shadows are handled by hardware lumen. But at the end ot doesn¡t change my comment. i think the game will be 600´720p at 30 fps on Series S. Click to expand... Click to shrink... Yes. I am sure Series S will have HW solution but probably at 30 FPS. that would be a miracle if they achieve 60 FPS.  ArchedThunder Uncle Beerus Member Oct 25, 2017 21,278 chris 1515 said: It will be a full port a few years after like The Witcher 3., they don't use software lumen here. I doubt the Switch 2 Raytracing capaclity is high enough to use the same pipeline to produce the Switch 2 version. EDIT: And they probably need to redo all the assets. / Fortnite doesn't use Nanite and Lumen on Switch 2. Click to expand... Click to shrink... Fortnite not using Lumen or Nanite at launch doesn't mean they can't run well on Switch 2. It's a launch port and they prioritized clean IQ and 60fps. 
I wouldn't be surprised to see them added later. Also it's not like the ray tracing in a Witcher 3 port has to match PS5, there's a lot of scaling back that can be done with ray tracing without ripping out the kitchen sink. Software lumen is also likely to be an option on P.   jroc74 Member Oct 27, 2017 34,465 Interesting times ahead.... bitcloudrzr said: Has it been confirmed that Sony is going to have release requirements like the XS? Click to expand... Click to shrink... Your know good n well everything about this rumor has been confirmed. /S  Derbel McDillet ▲ Legend ▲ Member Nov 23, 2022 25,250 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... How does this sound like a Cyberpunk issue? They didn't say they can't get it to work on the S.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin stanman said: And your mistake is comparing a PC graphics card to a console. Click to expand... Click to shrink...   reksveks Member May 17, 2022 7,628 Horns said: I hope Microsoft drops the requirement for Series S by the time this comes out. Click to expand... Click to shrink... why? dev can make it 30 fps on series s and 60 fps on series x if needed. if they aren't or don't have to drop it for gta vi, they probably ain't dropping it for tw4.  chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain defaltoption said: With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. Click to expand... Click to shrink... No the consoles won't hold back your 5090 because the game is created with hardware lumen, RT reflection, virtual shadows maps and Nanite plus Nanite vegetation in minds. 
Maybe Nanite character too in final version? If the game was made with software lumen as the base it would have holding back your 5090... Your PC will have much better IQ, framerate and better raytracing with Megalightand better raytracing settings in general.  bitcloudrzr Member May 31, 2018 21,044 jroc74 said: Interesting times ahead.... Your know good n well everything about this rumor has been confirmed. /S Click to expand... Click to shrink... Sony is like the opposite of a platform holder "forcing" adoption, for better or worse.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin chris 1515 said: No the consoles won't hold back yout 5090 because the game is created with hardware lumen, RT reflection, virtual shadows maps and Nanite plus Nanite vegetation in minds. Maybe Nanite character too in final version? If the game was made with software lumen as the base it would have holding back your 5090... Your PC will have much better IQ, framerate and better raytracing with Megalightand better raytracing settings in general. Click to expand... Click to shrink... Exactly, the series s is not a "mistake" or holding any version of the game on console or even PC back, that's what I'm saying to the person I replied to, its stupid to say that.   cursed beef Member Jan 3, 2021 998 Have to imagine MS will lift the Series S parity clause when the next consoles launch. Which will be before/around the time W4 hits, right?   Alvis Saw the truth behind the copied door Member Oct 25, 2017 12,270 EU Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... ? they said that 60 FPS on Series S is challenging, not the act of releasing the game there at all. The game can simply run at 30 FPS on Series S if they can't pull off 60 FPS. 
Or have a 40 FPS mode in lieu of 60 FPS. The CPU and storage speed differences between last gen and current gen were gigantic. This isn't even remotely close to a comparable situation.  defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin misqoute post   jroc74 Member Oct 27, 2017 34,465 defaltoption said: With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. Click to expand... Click to shrink... Ah yes, clearly 5090 cards are the vast majority of the minimum requirements for PC games. How can anyone say this with a straight face anymore when there are now PC games running on a Steam Deck. At least ppl saying that about the Series S are comparing it to other consoles. That said, it is interesting they are focusing on consoles first, then PC.  #projekt #red #tw4 #has #console
    WWW.RESETERA.COM
    CD Projekt RED: TW4 has console first development with a 60fps target; 60fps on Series S will be "extremely challenging"
    DriftingSpirit Member Oct 25, 2017 18,563 They note how they usually start with PC and scale down, but they will be doing it the other way around this time to avoid issues with the console versions. 4:15 for console focus and 60fps 38:50 for the Series S comment  bsigg Member Oct 25, 2017 25,153 [DF] Inside The Witcher 4 Unreal Engine 5 Tech Demo: CD Projekt RED + Epic Deep Dive Interview https://www.youtube.com/watch?v=OplYN2MMI4Q www.resetera.com   Skot Member Oct 30, 2017 645 720p on Series S incoming   Bulby Prophet of Truth Member Oct 29, 2017 6,006 Berlin I think think any series s user will be happy with a beautiful 900p 30fps   Chronos Member Oct 27, 2017 1,249 This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.   HellofaMouse Member Oct 27, 2017 8,551 i wonder if this'll come out before the gen is over? good chance itll be a 2077 situation, cross-gen release with a broken ps6 version  logash Member Oct 27, 2017 6,526 This makes sense since they want to have good performance on lower end machines and they mentioned that it was easier to scale up than to scale down. They also mentioned their legacy on PC and how they plan on scaling it up high like they usually do on PC.   KRT Member Aug 7, 2020 247 Series S was a mistake   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain The game have raytracing GI and reflection it will probably be 30 fps 600p-720p on Xbox Series S.   bitcloudrzr Member May 31, 2018 21,044 Bulby said: I think think any series s user will be happy with a beautiful 900p 30fps Click to expand... Click to shrink...   Yuuber Member Oct 28, 2017 4,540 KRT said: Series S was a mistake Click to expand... Click to shrink... Can we stop with these stupid takes? 
For all we know it sold as much as Series X, helped several games have better optimization on bigger consoles and it will definitely help optimizing newer games to the Nintendo Switch 2.  MANTRA Member Feb 21, 2024 1,198 No one who cares about 60fps should be buying a Series S, just make it 30fps.   Roytheone Member Oct 25, 2017 6,185 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... They can just go for 30 fps instead on the Series S. No need for a special deal for that, that's allowed.  Matterhorn Member Feb 6, 2019 254 United States Hoping for a very nice looking 30fps Switch 2 version.   Universal Acclaim Member Oct 5, 2024 2,617 Maybe off topic, but is 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc. whch is why the game can't be scaled down to 720-900p/60fps?   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain Matterhorn said: Hoping for a very nice looking 30fps Switch 2 version. Click to expand... Click to shrink... It will be a full port a few years after like The Witcher 3., they don't use software lumen here. I doubt the Switch 2 Raytracing capaclity is high enough to use the same pipeline to produce the Switch 2 version. EDIT: And they probably need to redo all the assets. https://www.reddit.com/r/FortNiteBR/comments/1l4a1o4/fortnite_on_the_switch_2_looks_great_these_low/ Fortnite doesn't use Nanite and Lumen on Switch 2.  Last edited: Yesterday at 4:18 PM bitcloudrzr Member May 31, 2018 21,044 Universal Acclaim said: Maybe off topic, but is 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc. whch is why the graphics can't be scaled down to 720p/60fps? Click to expand... Click to shrink... 
Graphics are the part of the game that can be scaled, it is CPU load that is the more difficult part, although devs have actually made cuts in the latter to increase performance mode fps viability. Even with this focus on 60fps performance modes, they are always going to have room to make a higher fidelity 30fps mode. Specifically with UE5 though, performance has been such a disaster all around and Epic seems to be taking it seriously now.   Greywaren Member Jul 16, 2019 13,530 Spain 60 fps target is fantastic, I wish it was the norm.   julia crawford Took the red AND the blue pills Member Oct 27, 2017 40,709 i am very ok with lower fps on the series s, it is far more palatable than severe resolution drops with upscaling artifacts.   Spoit Member Oct 28, 2017 5,599 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back   PLASTICA-MAN Member Oct 26, 2017 29,563 chris 1515 said: The game have raytracing GI and reflection it will probably be 30 fps 600p-720p on Xbox Series S. Click to expand... Click to shrink... There is kinda a misconception of how Lumen and the hybrid RT is handled in UE5 titles. AO is also part of the ray traced pipeline through the HW Lumen too. Just shadows are handled separately from the RT system by using VSM which in final look behvae quite like RT shadows in shape, same how FF16 handled the shadows looking like RT ones while it isn't traced. UE5 can still trace shadows if they want to push things even further.  overthewaves Member Sep 30, 2020 1,203 What about the PS5 handheld?   nullpotential Member Jun 24, 2024 87 KRT said: Series S was a mistake Click to expand... 
Consoles were a mistake.

GPU: I really don't think Series S/X will be much of a factor by the time this game comes out.

Lashley: Just make Series S 480p 30fps.

pappacone (replying to Greywaren: "60 fps target is fantastic, I wish it was the norm."): It pretty much is.

Super: I hope they can pull 60 FPS off in the full game.

Theorry: "Target." Uh huh. We know how that is gonna go.

Jakartalado (replying to Skot: "720p on Series S incoming"): If the PS5 is internally at 720p up to 900p, I seriously doubt that.

Revoltoftheunique: It will be an unstable 60fps with lots of stuttering.

defaltoption (replying to KRT: "Series S was a mistake"): With that same attitude, in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. That's why it's stupid.

Horns: I hope Microsoft drops the requirement for Series S by the time this comes out.

chris 1515 (replying to PLASTICA-MAN: "There is kind of a misconception about how Lumen and hybrid RT are handled in UE5 titles. AO is also part of the ray-traced pipeline, through hardware Lumen. Only shadows are handled separately from the RT system, using VSM, which in the final look behaves much like RT shadows in shape, the same way FF16 handled shadows that look ray traced while they aren't. UE5 can still trace shadows if they want to push things even further."): Yes, indirect shadows are handled by hardware Lumen. But in the end it doesn't change my comment. I think the game will be 600-720p at 30 fps on Series S.

bitcloudrzr (replying to Spoit: "And yet people keep talking about somehow getting PS6 games to work on the Sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back"): Has it been confirmed that Sony is going to have release requirements like the XS?

Commander Shepherd: Anyone remember when "no load screens" was the talk for The Witcher 3?

chris 1515: No, this is probably different from how most games do it. Here the main focus is the 60 fps mode, and afterwards they can create balanced (40 fps) and 30 fps modes. It's not the other way around.

stanman (replying to defaltoption above): And your mistake is comparing a PC graphics card to a console.

PLASTICA-MAN (replying to chris 1515 above): Yes. I am sure Series S will have the HW solution, but probably at 30 FPS. It would be a miracle if they achieve 60 FPS.

ArchedThunder (replying to chris 1515: "It will be a full port a few years later, like The Witcher 3; they don't use software Lumen here. I doubt the Switch 2's ray-tracing capability is high enough to use the same pipeline for the Switch 2 version. EDIT: And they probably need to redo all the assets. https://www.reddit.com/r/FortNiteBR/comments/1l4a1o4/fortnite_on_the_switch_2_looks_great_these_low/ — Fortnite doesn't use Nanite and Lumen on Switch 2."): Fortnite not using Lumen or Nanite at launch doesn't mean they can't run well on Switch 2. It's a launch port and they prioritized clean IQ and 60fps. I wouldn't be surprised to see them added later. Also, it's not like the ray tracing in a Witcher 3 port has to match PS5; there's a lot of scaling back that can be done with ray tracing without ripping out the kitchen sink. Software Lumen is also likely to be an option on P.

jroc74: Interesting times ahead.... (replying to bitcloudrzr above): You know good n well everything about this rumor has been confirmed. /S

Derbel McDillet (replying to Chronos: "This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation."): How does this sound like a Cyberpunk issue? They didn't say they can't get it to work on the S.

defaltoption quoted stanman's post without comment.

reksveks (replying to Horns above): Why? Devs can make it 30 fps on Series S and 60 fps on Series X if needed. If they aren't or don't have to drop it for GTA VI, they probably ain't dropping it for TW4.

chris 1515 (replying to defaltoption above): No, the consoles won't hold back your 5090, because the game is created with hardware Lumen, RT reflections, virtual shadow maps, and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version? If the game had been made with software Lumen as the base, it would have held back your 5090. Your PC will have much better IQ and framerate, plus better ray tracing with MegaLights (direct ray-traced shadows with tons of light sources) and better ray-tracing settings in general.

bitcloudrzr (replying to jroc74 above): Sony is like the opposite of a platform holder "forcing" adoption, for better or worse.

defaltoption (replying to chris 1515 above): Exactly. The Series S is not a "mistake" and isn't holding any version of the game back, on console or even PC. That's what I'm saying to the person I replied to; it's stupid to say that.

cursed beef: Have to imagine MS will lift the Series S parity clause when the next consoles launch. Which will be before/around the time W4 hits, right?
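For readers who don't follow UE5 rendering settings, the pipeline described in the thread (hardware Lumen for GI and reflections, virtual shadow maps rather than ray-traced shadows) maps roughly onto a handful of engine console variables. This is only an illustrative sketch, not CD Projekt Red's actual configuration; the variable names are standard UE5 cvars, but defaults and behavior vary by engine version:

```ini
; DefaultEngine.ini (illustrative sketch only, not the game's real config)
[/Script/Engine.RendererSettings]
r.DynamicGlobalIlluminationMethod=1   ; 1 = Lumen global illumination
r.ReflectionMethod=1                  ; 1 = Lumen reflections
r.Lumen.HardwareRayTracing=1          ; hardware RT path instead of software tracing
r.Shadow.Virtual.Enable=1             ; direct shadows via virtual shadow maps, not RT
```

With this split, ambient occlusion and indirect lighting ride on the ray-traced Lumen path while direct shadows stay rasterized, which is the distinction the posters above are debating.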
Alvis (replying to Chronos above): ? They said that 60 FPS on Series S is challenging, not releasing the game there at all. The game can simply run at 30 FPS on Series S if they can't pull off 60 FPS, or have a 40 FPS mode in lieu of 60 FPS. The CPU and storage speed differences between last gen and current gen were gigantic. This isn't even remotely close to a comparable situation.

defaltoption: misquoted post

jroc74 (replying to defaltoption's "consoles are the mistake" post): Ah yes, clearly 5090 cards are the vast majority of the minimum requirements for PC games. How can anyone say this with a straight face anymore when there are now PC games running on a Steam Deck? At least ppl saying that about the Series S are comparing it to other consoles. That said, it is interesting that they are focusing on consoles first, then PC.
    Mirela Cialai Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential.
    That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success.
    In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers.
    You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI.
    Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.

     
    Mirela Cialai Q&A Interview
    1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience?

    Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives.

    This could be revenue growth, customer retention, market expansion, or operational efficiency.
    We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition.
    We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals.
    In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance.
    This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth.
    Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings.
    Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences.
    To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale.

    By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals.

    2. What steps did you take to ensure data accuracy?
    The data team was very diligent in ensuring that our data warehouse had accurate data.
    So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc.

    That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data.

    3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy?
    Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability.
    I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by anywhere from 25% to 95%.
    This data helps make a compelling case to stakeholders about the importance of prioritizing retention.
    Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth.
    This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives.

    By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy.

    4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement?
    Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach.
    The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives.
    I start by taking an inventory of all tools currently in use, including their purpose, owner, and key functionalities; assessing whether these tools are being used to their full potential or whether features remain unused; and reviewing how well the tools integrate with one another and with our core systems, such as the data warehouse.
    I also compare the capabilities and results of each tool against industry standards and competitor practices, look for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identify overlapping tools that could be consolidated to save costs and streamline workflows.
    Finally, I review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities.

    Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape.
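The audit steps described above — inventorying each tool, flagging underused capabilities, and spotting category overlap worth consolidating — can be sketched in a few lines. A minimal illustration only; the tool names, categories, costs, and utilization figures below are invented:

```python
# Hypothetical Martech stack audit sketch. All data is invented for illustration.
tools = [
    {"name": "ESP A", "category": "email", "annual_cost": 40_000, "features_used": 0.9},
    {"name": "ESP B", "category": "email", "annual_cost": 25_000, "features_used": 0.2},
    {"name": "CDP",   "category": "data",  "annual_cost": 60_000, "features_used": 0.7},
]

def audit(tools, usage_floor=0.5):
    """Flag tools using under `usage_floor` of their features, and category overlaps."""
    underused = [t["name"] for t in tools if t["features_used"] < usage_floor]
    by_category = {}
    for t in tools:
        by_category.setdefault(t["category"], []).append(t["name"])
    # Two or more tools in one category are consolidation candidates.
    overlaps = {c: names for c, names in by_category.items() if len(names) > 1}
    return underused, overlaps

underused, overlaps = audit(tools)
print("Underused (training or cut candidates):", underused)
print("Overlapping (consolidation candidates):", overlaps)
```

Run on a real inventory, the "underused" list feeds the full-potential question and the "overlaps" map feeds the consolidation question from the audit above.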

    5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for?
    I recommend taking a structured approach: first, ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels.
    Next, determine whether the stack can handle increasing data volumes, larger audiences, and additional channels as campaigns grow. Check whether it supports dynamic content, behavior-based triggers, and advanced segmentation, and whether it can process and act on data in real time through emerging technologies like AI/ML predictive analytics, enabling marketers to launch responsive and timely campaigns.
    Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns.
    Some of the red flags are: data silos, where customer data is fragmented across platforms and not easily accessible or integrated; an inability to process or respond to customer behavior in real time; a reliance on manual intervention for tasks like segmentation, data extraction, and campaign deployment; and poor scalability.

    If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs.

    6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap?
    Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes.
    Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact.
    Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert.

    By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success.
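As a concrete illustration of the behavior-based triggers mentioned above, a cart-abandonment check might look like the sketch below. The 30-minute window, the function name, and the message wording are all invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: send a personalized nudge when a cart has sat idle
# past a threshold and no purchase followed. All names/values are invented.
ABANDON_AFTER = timedelta(minutes=30)

def cart_abandonment_message(cart_updated_at, purchased, first_name, now):
    """Return a personalized message if the cart looks abandoned, else None."""
    if purchased or now - cart_updated_at < ABANDON_AFTER:
        return None  # still shopping, or already converted
    return f"Hi {first_name}, you left something in your cart!"

now = datetime(2025, 1, 1, 12, 0)
print(cart_abandonment_message(now - timedelta(hours=2), False, "Ana", now))
```

The point of the sketch is the timing logic: the message is both personalized (the customer's name, their cart) and timely (fired only in the window when conversion is most likely).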

    7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives?
    To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success.
    Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value.
    Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities.
    Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth.
    By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs.

    In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability.

    In the past, I’ve used frameworks like the Impact/Effort Matrix to identify high-impact, low-effort initiatives and ensure that the most critical projects are addressed first.
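A minimal sketch of how such a matrix can be applied in practice — the initiatives and the 1-10 scores below are invented for illustration:

```python
# Hypothetical Impact/Effort Matrix sketch. Initiatives and scores are invented.
def quadrant(impact: int, effort: int, threshold: int = 5) -> str:
    """Classify an initiative on a 1-10 impact/effort scale into a quadrant."""
    if impact >= threshold and effort < threshold:
        return "quick win"        # high impact, low effort: do first
    if impact >= threshold:
        return "major project"    # high impact, high effort: plan carefully
    if effort < threshold:
        return "fill-in"          # low impact, low effort: do when idle
    return "avoid"                # low impact, high effort: deprioritize

initiatives = [
    ("Email database cleanup", 8, 3),
    ("Attribution model rollout", 9, 8),
    ("New loyalty micro-site", 3, 7),
]

for name, impact, effort in initiatives:
    print(f"{name}: {quadrant(impact, effort)}")
```

Sorting the backlog this way makes the "greatest impact first, feasibility considered" prioritization from the answer above explicit and repeatable.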
    8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you?
    Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability.
    We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success.
    To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams.

    To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together.

    9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like?
    A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine.
    In one word: PAPER. Here’s how it breaks down.

    Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals.
    Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps.
    Prioritize: We rank initiatives based on impact, feasibility, and ROI potential.
    Execute: We implement the roadmap in manageable phases.
    Refine: We continuously improve CRM performance and refine the roadmap.

    So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy.

    10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively?
    The most critical challenge is customer data being siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences.

    The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth.

    Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies.
    The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes.
    Another challenge is overcoming internal silos: misalignment between teams can lead to inconsistent messaging and delayed execution.
    A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions.
    Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others.
    While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends.

    By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success.

    11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind?
    I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives.
    Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives.

    Another important lesson: The roadmap is only as effective as the data and systems it’s built upon.

    I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on.
    A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers.

    So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.

     

     
    This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage.
So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.     This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage. #mirela #cialai #qampampa #customer #engagement
  • Selection Sort Time Complexity: Best, Worst, and Average Cases

    Development and Testing 


    Sorting is a basic task in programming. It arranges data in order. There are many sorting algorithms. Selection Sort is one of the simplest sorting methods. It is easy to understand and code. But it is not the fastest. In this guide, we will explain the Selection Sort Time Complexity. We will cover best, worst, and average cases.
    What Is Selection Sort?
    Selection Sort works by selecting the smallest element from the list. It places it in the correct position. It repeats this process for all elements. One by one, it moves the smallest values to the front.
    Let’s see an example:
    Input: [5, 3, 8, 2]
    Step 1: Smallest is 2 → swap with 5 → [2, 3, 8, 5]
    Step 2: Smallest in remaining is 3 → already correct
    Step 3: Smallest in remaining is 5 → swap with 8 → [2, 3, 5, 8]
    Now the list is sorted.
    How Selection Sort Works
    Selection Sort uses two loops. The outer loop moves one index at a time. The inner loop finds the smallest element. After each pass, the smallest value is moved to the front. The position is fixed. Selection Sort does not care if the list is sorted or not. It always does the same steps.
    Selection Sort Algorithm
    Here is the basic algorithm:

    Start from the first element
    Find the smallest in the rest of the list
    Swap it with the current element
    Repeat for each element

    This repeats until all elements are sorted.
    Selection Sort Code (Java Example)
    public class SelectionSort {
        public static void sort(int[] arr) {
            int n = arr.length;
            for (int i = 0; i < n - 1; i++) {
                int min = i;
                for (int j = i + 1; j < n; j++) {
                    if (arr[j] < arr[min]) {
                        min = j;
                    }
                }
                int temp = arr[min];
                arr[min] = arr[i];
                arr[i] = temp;
            }
        }
    }

    This code uses two loops. The outer loop runs n-1 times. The inner loop finds the minimum.
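    A minimal driver for the sort shown above might look like the sketch below. The class name SelectionSortDemo is illustrative; the sort body mirrors the article's code so the example is self-contained.

```java
import java.util.Arrays;

// Self-contained demo of the selection sort described above.
// Class name SelectionSortDemo is illustrative, not from the article.
public class SelectionSortDemo {
    public static void sort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int min = i;                 // index of the smallest element seen so far
            for (int j = i + 1; j < n; j++) {
                if (arr[j] < arr[min]) {
                    min = j;
                }
            }
            int temp = arr[min];         // swap the smallest into position i
            arr[min] = arr[i];
            arr[i] = temp;
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 3, 8, 2};
        sort(data);
        System.out.println(Arrays.toString(data)); // prints [2, 3, 5, 8]
    }
}
```

    Running it reproduces the worked example from earlier: [5, 3, 8, 2] becomes [2, 3, 5, 8].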
    Selection Sort Time Complexity
    Now let’s understand the main topic. Let’s analyze Selection Sort Time Complexity in three cases.
    1. Best Case
    Even if the array is already sorted, Selection Sort checks all elements. It keeps comparing, even though nothing needs to move.

    Time Complexity: O(n²)
    Reason: Inner loop runs fully, regardless of the order

    Example Input: [1, 2, 3, 4, 5]
    Even here, every comparison still happens. Only fewer swaps occur, but comparisons remain the same.
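    This best-case behavior can be checked empirically. The sketch below is an instrumented copy of the sort that counts comparisons and swaps; the class name SelectionSortCounts and the `min != i` swap-skip check are additions for illustration, not part of the article's code.

```java
// Instrumented selection sort: counts comparisons and real swaps.
// Class name and the min != i check are illustrative additions.
public class SelectionSortCounts {
    // Returns {comparisons, swaps} performed while sorting arr in place.
    public static long[] sortWithCounts(int[] arr) {
        long comparisons = 0, swaps = 0;
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int min = i;
            for (int j = i + 1; j < n; j++) {
                comparisons++;
                if (arr[j] < arr[min]) {
                    min = j;
                }
            }
            if (min != i) {              // only count swaps that move elements
                swaps++;
                int temp = arr[min];
                arr[min] = arr[i];
                arr[i] = temp;
            }
        }
        return new long[]{comparisons, swaps};
    }

    public static void main(String[] args) {
        long[] sorted = sortWithCounts(new int[]{1, 2, 3, 4, 5});
        long[] reversed = sortWithCounts(new int[]{5, 4, 3, 2, 1});
        // Comparisons match (10 for n = 5); only the swap counts differ.
        System.out.println("sorted:   " + sorted[0] + " comparisons, " + sorted[1] + " swaps");
        System.out.println("reversed: " + reversed[0] + " comparisons, " + reversed[1] + " swaps");
    }
}
```

    For n = 5, both inputs cost 10 comparisons; the already-sorted input just performs zero swaps.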
    2. Worst Case
    This happens when the array is in reverse order. But Selection Sort does not optimize for this.

    Time Complexity: O(n²)
    Reason: Still needs full comparisons

    Example Input: [5, 4, 3, 2, 1]
    Even in reverse, the steps are the same. It compares and finds the smallest element every time.
    3. Average Case
    This is when elements are randomly placed. It is the most common scenario in real-world problems.

    Time Complexity: O(n²)
    Reason: Still compares each element in the inner loop

    Example Input: [3, 1, 4, 2, 5]
    Selection Sort does not change behavior based on input order. So the complexity remains the same.
    Why Is It Always O(n²)?
    Selection Sort compares all pairs of elements. The number of comparisons does not change.
    Total comparisons = n × (n − 1) / 2
    That’s why the time complexity is always O(n²). It does not reduce steps in any case. It does not take advantage of sorted elements.
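    The closed form above can be verified with a short sketch; the class name ComparisonFormulaCheck and the random test inputs are illustrative assumptions.

```java
import java.util.Random;

// Verifies that selection sort performs exactly n * (n - 1) / 2
// comparisons, regardless of input order. Class name is illustrative.
public class ComparisonFormulaCheck {
    public static long countComparisons(int[] arr) {
        long comparisons = 0;
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int min = i;
            for (int j = i + 1; j < n; j++) {
                comparisons++;
                if (arr[j] < arr[min]) {
                    min = j;
                }
            }
            int temp = arr[min];
            arr[min] = arr[i];
            arr[i] = temp;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);     // fixed seed for repeatability
        for (int n : new int[]{1, 2, 10, 100}) {
            int[] arr = new int[n];
            for (int k = 0; k < n; k++) {
                arr[k] = rng.nextInt(1000);
            }
            long expected = (long) n * (n - 1) / 2;
            System.out.println("n=" + n + ": " + countComparisons(arr)
                    + " comparisons (formula says " + expected + ")");
        }
    }
}
```

    Whatever the contents of the array, the count always equals the formula, which is why all three cases share the same complexity.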
    Space Complexity
    Selection Sort does not need extra space. It sorts in place.

    Space Complexity: O(1)
    Only a few variables are used
    No extra arrays or memory needed

    This is one of Selection Sort’s strong points.
    Comparison with Other Algorithms
    Let’s compare Selection Sort with other basic sorts:
    Algorithm        Best Case   Average Case  Worst Case  Space
    Selection Sort   O(n²)       O(n²)         O(n²)       O(1)
    Bubble Sort      O(n)        O(n²)         O(n²)       O(1)
    Insertion Sort   O(n)        O(n²)         O(n²)       O(1)
    Merge Sort       O(n log n)  O(n log n)    O(n log n)  O(n)
    Quick Sort       O(n log n)  O(n log n)    O(n²)       O(log n)

    As you see, Selection Sort is slower than Merge Sort and Quick Sort.
    Advantages of Selection Sort

    Very simple and easy to understand
    Works well with small datasets
    Needs very little memory
    Good for learning purposes

    Disadvantages of Selection Sort

    Slow on large datasets
    Always takes the same time, even if sorted
    Not efficient for real-world use

    When to Use Selection Sort
    Use Selection Sort when:

    You are working with a very small dataset
    You want to teach or learn sorting logic
    You want simple, in-place, low-memory sorting

    Avoid it for:

    Large datasets
    Performance-sensitive programs

    Conclusion
    Selection Sort Time Complexity is simple to understand. But it is not efficient for big problems. It always takes O(n²) time, no matter the case. That is the same for best, worst, and average inputs. Still, it is useful in some cases. It’s great for learning sorting basics. It uses very little memory. If you’re working with small arrays, Selection Sort is fine. For large data, use better algorithms. Understanding its time complexity helps you choose the right algorithm. Always pick the tool that fits your task.
    Tech World Times (TWT), a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI and Startups. For guest posts, contact techworldtimes@gmail.com.
    #selection #sort #time #complexity #best
    Selection Sort Time Complexity: Best, Worst, and Average Cases
    Development and Testing  Rate this post Sorting is a basic task in programming. It arranges data in order. There are many sorting algorithms. Selection Sort is one of the simplest sorting methods. It is easy to understand and code. But it is not the fastest. In this guide, we will explain the Selection Sort Time Complexity. We will cover best, worst, and average cases. What Is Selection Sort? Selection Sort works by selecting the smallest element from the list. It places it in the correct position. It repeats this process for all elements. One by one, it moves the smallest values to the front. Let’s see an example: Input:Step 1: Smallest is 2 → swap with 5 →Step 2: Smallest in remaining is 3 → already correctStep 3: Smallest in remaining is 5 → swap with 8 →Now the list is sorted.How Selection Sort Works Selection Sort uses two loops. The outer loop moves one index at a time. The inner loop finds the smallest element. After each pass, the smallest value is moved to the front. The position is fixed. Selection Sort does not care if the list is sorted or not. It always does the same steps. Selection Sort Algorithm Here is the basic algorithm: Start from the first element Find the smallest in the rest of the list Swap it with the current element Repeat for each element This repeats until all elements are sorted. Selection Sort CodejavaCopyEditpublic class SelectionSort { public static void sort{ int n = arr.length; for{ int min = i; for{ if{ min = j; } } int temp = arr; arr= arr; arr= temp; } } } This code uses two loops. The outer loop runs n-1 times. The inner loop finds the minimum. Selection Sort Time Complexity Now let’s understand the main topic. Let’s analyze Selection Sort Time Complexity in three cases. 1. Best Case Even if the array is already sorted, Selection Sort checks all elements. It keeps comparing and swapping. Time Complexity: OReason: Inner loop runs fully, regardless of the order Example Input:Even here, every comparison still happens. 
Only fewer swaps occur, but comparisons remain the same. 2. Worst Case This happens when the array is in reverse order. But Selection Sort does not optimize for this. Time Complexity: OReason: Still needs full comparisons Example Input:Even in reverse, the steps are the same. It compares and finds the smallest element every time. 3. Average Case This is when elements are randomly placed. It is the most common scenario in real-world problems. Time Complexity: OReason: Still compares each element in the inner loop Example Input:Selection Sort does not change behavior based on input order. So the complexity remains the same. Why Is It Always O? Selection Sort compares all pairs of elements. The number of comparisons does not change. Total comparisons = n ×/ 2 That’s why the time complexity is always O.It does not reduce steps in any case. It does not take advantage of sorted elements. Space Complexity Selection Sort does not need extra space. It sorts in place. Space Complexity: OOnly a few variables are used No extra arrays or memory needed This is one good point of the Selection Sort. Comparison with Other Algorithms Let’s compare Selection Sort with other basic sorts: AlgorithmBest CaseAverage CaseWorst CaseSpaceSelection SortOOOOBubble SortOOOOInsertion SortOOOOMerge SortOOOOQuick SortOOOOAs you see, Selection Sort is slower than Merge Sort and Quick Sort. Advantages of Selection Sort Very simple and easy to understand Works well with small datasets Needs very little memory Good for learning purposes Disadvantages of Selection Sort Slow on large datasets Always takes the same time, even if sorted Not efficient for real-world use When to Use Selection Sort Use Selection Sort when: You are working with a very small dataset You want to teach or learn sorting logic You want stable, low-memory sorting Avoid it for: Large datasets Performance-sensitive programs Conclusion Selection Sort Time Complexity is simple to understand. But it is not efficient for big problems. 
It always takes Otime, no matter the case. That is the same for best, worst, and average inputs. Still, it is useful in some cases. It’s great for learning sorting basics. It uses very little memory. If you’re working with small arrays, Selection Sort is fine. For large data, use better algorithms. Understanding its time complexity helps you choose the right algorithm. Always pick the tool that fits your task. Tech World TimesTech World Times, a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI and Startups. If you are looking for the guest post then contact at techworldtimes@gmail.com #selection #sort #time #complexity #best
    TECHWORLDTIMES.COM
    Selection Sort Time Complexity: Best, Worst, and Average Cases
    Development and Testing  Rate this post Sorting is a basic task in programming. It arranges data in order. There are many sorting algorithms. Selection Sort is one of the simplest sorting methods. It is easy to understand and code. But it is not the fastest. In this guide, we will explain the Selection Sort Time Complexity. We will cover best, worst, and average cases. What Is Selection Sort? Selection Sort works by selecting the smallest element from the list. It places it in the correct position. It repeats this process for all elements. One by one, it moves the smallest values to the front. Let’s see an example: Input: [5, 3, 8, 2]Step 1: Smallest is 2 → swap with 5 → [2, 3, 8, 5]Step 2: Smallest in remaining is 3 → already correctStep 3: Smallest in remaining is 5 → swap with 8 → [2, 3, 5, 8] Now the list is sorted.How Selection Sort Works Selection Sort uses two loops. The outer loop moves one index at a time. The inner loop finds the smallest element. After each pass, the smallest value is moved to the front. The position is fixed. Selection Sort does not care if the list is sorted or not. It always does the same steps. Selection Sort Algorithm Here is the basic algorithm: Start from the first element Find the smallest in the rest of the list Swap it with the current element Repeat for each element This repeats until all elements are sorted. Selection Sort Code (Java Example) javaCopyEditpublic class SelectionSort { public static void sort(int[] arr) { int n = arr.length; for (int i = 0; i < n - 1; i++) { int min = i; for (int j = i + 1; j < n; j++) { if (arr[j] < arr[min]) { min = j; } } int temp = arr[min]; arr[min] = arr[i]; arr[i] = temp; } } } This code uses two loops. The outer loop runs n-1 times. The inner loop finds the minimum. Selection Sort Time Complexity Now let’s understand the main topic. Let’s analyze Selection Sort Time Complexity in three cases. 1. Best Case Even if the array is already sorted, Selection Sort checks all elements. 
    It keeps making the same comparisons.

    Time Complexity: O(n²)
    Reason: The inner loop always runs fully, regardless of the order.

    Example Input: [1, 2, 3, 4, 5]

    Even here, every comparison still happens. Fewer swaps occur, but the comparisons remain the same.

    2. Worst Case

    This happens when the array is in reverse order. Selection Sort does not optimize for this case.

    Time Complexity: O(n²)
    Reason: It still performs all comparisons.

    Example Input: [5, 4, 3, 2, 1]

    Even in reverse order, the steps are the same. It compares and finds the smallest element every time.

    3. Average Case

    This is when elements are randomly placed. It is the most common scenario in real-world problems.

    Time Complexity: O(n²)
    Reason: The inner loop still compares each remaining element.

    Example Input: [3, 1, 4, 2, 5]

    Selection Sort does not change its behavior based on input order, so the complexity remains the same.

    Why Is It Always O(n²)?

    Selection Sort compares all pairs of elements, and the number of comparisons does not depend on the input:

    Total comparisons = n × (n − 1) / 2

    That is why the time complexity is always O(n²). It does not reduce its steps in any case, and it does not take advantage of already sorted elements.

    Space Complexity

    Selection Sort does not need extra space. It sorts in place.

    Space Complexity: O(1)

    - Only a few variables are used
    - No extra arrays or memory needed

    This is one good point of Selection Sort.

    Comparison with Other Algorithms

    Let's compare Selection Sort with other basic sorts:

    Algorithm        Best Case   Average Case  Worst Case  Space
    Selection Sort   O(n²)       O(n²)         O(n²)       O(1)
    Bubble Sort      O(n)        O(n²)         O(n²)       O(1)
    Insertion Sort   O(n)        O(n²)         O(n²)       O(1)
    Merge Sort       O(n log n)  O(n log n)    O(n log n)  O(n)
    Quick Sort       O(n log n)  O(n log n)    O(n²)       O(log n)

    As you can see, Selection Sort is slower than Merge Sort and Quick Sort.
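    The fixed comparison count n × (n − 1) / 2 can be checked empirically. The sketch below is an instrumented variant of the sort (the class and method names, SelectionSortCount and sortAndCount, are illustrative additions): it counts comparisons for a sorted, reversed, and random array of the same length and shows that the count is identical in all three cases.

```java
// Instrumented Selection Sort: counts comparisons to show the total
// is always n * (n - 1) / 2, regardless of input order.
public class SelectionSortCount {
    // Sorts arr in place and returns the number of comparisons made.
    public static long sortAndCount(int[] arr) {
        long comparisons = 0;
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int min = i;
            for (int j = i + 1; j < n; j++) {
                comparisons++;           // one comparison per inner-loop step
                if (arr[j] < arr[min]) {
                    min = j;
                }
            }
            // swap as in the plain version above
            int temp = arr[min];
            arr[min] = arr[i];
            arr[i] = temp;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        // n = 5, so each call performs 5 * 4 / 2 = 10 comparisons.
        System.out.println(sortAndCount(new int[]{1, 2, 3, 4, 5})); // 10
        System.out.println(sortAndCount(new int[]{5, 4, 3, 2, 1})); // 10
        System.out.println(sortAndCount(new int[]{3, 1, 4, 2, 5})); // 10
    }
}
```

    The swap count differs between the three inputs, but the comparison count (which dominates the running time) does not, which is exactly why all three cases are O(n²).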
    Advantages of Selection Sort

    - Very simple and easy to understand
    - Works well with small datasets
    - Needs very little memory
    - Good for learning purposes

    Disadvantages of Selection Sort

    - Slow on large datasets
    - Always takes the same time, even if the input is already sorted
    - Not efficient for most real-world use

    When to Use Selection Sort

    Use Selection Sort when:

    - You are working with a very small dataset
    - You want to teach or learn sorting logic
    - You want simple, in-place, low-memory sorting (note that the usual swap-based version is not stable)

    Avoid it for:

    - Large datasets
    - Performance-sensitive programs

    Conclusion

    Selection Sort time complexity is simple to understand, but the algorithm is not efficient for big problems. It always takes O(n²) time, no matter the case: best, worst, and average inputs are all the same. Still, it is useful in some situations. It is great for learning sorting basics, and it uses very little memory. If you are working with small arrays, Selection Sort is fine; for large data, use better algorithms. Understanding its time complexity helps you choose the right algorithm. Always pick the tool that fits your task.
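    The step-by-step walkthrough at the top of this guide can be reproduced by printing the array after each outer-loop pass. This is a self-contained sketch (the class name SelectionSortTrace and the trace method are illustrative additions), not part of the original algorithm:

```java
import java.util.Arrays;

// Prints the array after each pass, matching the [5, 3, 8, 2] walkthrough.
public class SelectionSortTrace {
    // Sorts arr in place, printing the state after each pass; returns arr.
    public static int[] trace(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int min = i;
            for (int j = i + 1; j < n; j++) {
                if (arr[j] < arr[min]) {
                    min = j;
                }
            }
            int temp = arr[min];
            arr[min] = arr[i];
            arr[i] = temp;
            System.out.println("Pass " + (i + 1) + ": " + Arrays.toString(arr));
        }
        return arr;
    }

    public static void main(String[] args) {
        trace(new int[]{5, 3, 8, 2});
        // Pass 1: [2, 3, 8, 5]
        // Pass 2: [2, 3, 8, 5]  (3 was already in place)
        // Pass 3: [2, 3, 5, 8]
    }
}
```

    Note that pass 2 leaves the array unchanged because the smallest remaining element is already in position, mirroring Step 2 of the walkthrough.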
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that they brought an early version of GPT-4 to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa,
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
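    As a trivial illustration of what “checkable for validity” means here: a Lean 4 theorem whose proof the kernel verifies mechanically, independent of any human reader. The example is elementary and purely illustrative, standing in for the complex machine-generated proofs envisioned above.

```lean
-- A machine-checkable statement: the Lean kernel verifies the
-- proof term below mechanically; no human judgment is involved
-- in confirming its validity.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```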
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and more about assessing how well AI models can complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important work that speaks to how well AI models perform in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
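    The contrast drawn here, rubric-style task grading rather than multiple-choice scoring, can be sketched in a few lines. Everything below (the criteria, the weights, the keyword matching) is invented for illustration; it is not how HealthBench or ADeLe actually work, and a real harness would use an LLM judge rather than substring matching.

```python
# Illustrative sketch only: grade a model's free-text answer
# against a rubric of weighted criteria instead of a single
# multiple-choice answer key.

def rubric_score(response: str, rubric: list[tuple[str, float]]) -> float:
    """Return the fraction of rubric weight whose keyword appears
    in the response (a crude stand-in for an LLM-based grader)."""
    earned = sum(w for phrase, w in rubric if phrase in response.lower())
    total = sum(w for _, w in rubric)
    return earned / total if total else 0.0

# Hypothetical rubric for a triage task.
rubric = [
    ("chest pain", 2.0),  # recognizes the key symptom
    ("ecg", 3.0),         # orders the right first test
    ("escalate", 1.0),    # knows when to hand off to a human
]

print(rubric_score("Given the chest pain, obtain an ECG and escalate.", rubric))
# → 1.0 (all weighted criteria satisfied)
```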
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
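    The “patients like me” paradigm, retrieving prior patients with similar data and summarizing their outcomes, can be sketched as a simple nearest-neighbor lookup. The features, cohort, and outcomes below are entirely made up for illustration; a real system would need far richer patient representations and strong privacy safeguards.

```python
# Sketch of the "patients like me" idea: represent each prior
# patient as a feature vector plus an outcome, then summarize
# the outcomes of the k nearest neighbors of a new patient.
import math
from collections import Counter

def nearest_outcomes(patient, cohort, k=3):
    """cohort: list of (feature_vector, outcome) pairs."""
    ranked = sorted(cohort, key=lambda rec: math.dist(patient, rec[0]))
    return Counter(outcome for _, outcome in ranked[:k])

# Toy cohort: (age, systolic BP, HbA1c) -> treatment outcome.
cohort = [
    ((54, 130, 6.1), "improved"),
    ((58, 128, 6.4), "improved"),
    ((71, 160, 8.2), "readmitted"),
    ((49, 118, 5.6), "improved"),
]
print(nearest_outcomes((56, 132, 6.3), cohort, k=3))
# → Counter({'improved': 3})
```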
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    How AI is reshaping the future of healthcare and medical research
    Transcript        PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”           This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.      Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  
LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. 
But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. 
But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. 
And just right there, it was shown to be possible.   LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   
Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  
LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   
The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. 
And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. 
It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? 
What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. 
And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   
The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. 
I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? 
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. 
He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? 
What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time. 
    How AI is reshaping the future of healthcare and medical research
Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  
BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. 
It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  
I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. 
Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  
You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. 
[LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  
It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa … so, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  
LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? 
So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. 
So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. 
So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  
GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. 
By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   
Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  
And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   
I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   
HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
  • Microsoft trolls Apple's new Liquid Glass UI for looking like Windows Vista

    In a nutshell: The OS updates coming to Apple devices later this year will institute the company's first major UI design shift in over a decade, but eagle-eyed observers noticed similarities with an old version of Windows – comparisons that haven't escaped Microsoft's notice. Thankfully, users concerned about Apple's upcoming interface will have options to change its visual presentation.
    Some of Microsoft's social media accounts recently poked fun at the upcoming "Liquid Glass" user interface design language Apple unveiled at WWDC this week. Although the Cupertino giant has hailed the update as a major innovation, many immediately began comparing it to Microsoft's nearly two-decade-old Windows Vista UI.
Liquid Glass is Apple's name for the new visual style arriving in iOS 26, iPadOS 26, macOS 26 Tahoe, watchOS 26, and tvOS 26, which will launch this fall. Inspired by the Apple Vision Pro's visionOS, the design language favors rounded edges and transparent backgrounds for inputs and other UI functions.
    It is Apple's most significant design change since iOS 7 debuted almost 12 years ago, and the first to establish a unified language across all of the company's devices.
    On the left: nice Liquid Glass UI minimalistic look. On the right: Liquid Glass looking all kinds of wrong in the current beta.

    Apps, wallpapers, and other background content will be visible through app icons, notifications, and menu elements for a glass-like appearance. Apple claims that the effect will improve cohesion across the interface, but beta testers are concerned that text will become less readable.
    Others, including Microsoft, mocked the update's resemblance to Windows Vista's glass-like "Aero" aesthetic, which debuted in 2007. That OS also made UI elements partially transparent, but Microsoft eventually phased it out when it began moving toward its current design language.
    The official Windows Instagram account recently responded to Apple's presentation by posting a slideshow of Vista screenshots played over a nostalgic Windows boot tune. The Windows Twitter account also shared a picture recalling the Vista-era profile icons.
    Other social media users joined in on the fun. Some highlighted the unfortunate placement of the YouTube icon in Apple's Liquid Glass explainer video, which the company altered. Others compared the design language to the unique chassis for Apple's 2000 Power Mac G4 Cube and the main menu for Nintendo's 2012 Wii U game console.
    Fortunately, users can customize Liquid Glass by switching between transparent, light, and dark modes. They can also opt for a slightly more opaque presentation with a toggle located under Settings > Accessibility > Display & Text Size > Reduce Transparency.
    WWW.TECHSPOT.COM
  • Biofuels policy has been a failure for the climate, new report claims

    Fewer food crops


    Report: An expansion of biofuels policy under Trump would lead to more greenhouse gas emissions.

    Georgina Gustin, Inside Climate News



    Jun 14, 2025 7:10 am

    An ethanol production plant on March 20, 2024 near Ravenna, Nebraska. Credit: David Madison/Getty Images


    This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.
    The American Midwest is home to some of the richest, most productive farmland in the world, enabling its transformation into a vast corn- and soy-producing machine—a conversion spurred largely by decades-long policies that support the production of biofuels.
    But a new report takes a big swing at the ethanol orthodoxy of American agriculture, criticizing the industry for causing economic and social imbalances across rural communities and saying that the expansion of biofuels will increase greenhouse gas emissions, despite their purported climate benefits.
    The report, from the World Resources Institute, which has been critical of US biofuel policy in the past, draws from 100 academic studies on biofuel impacts. It concludes that ethanol policy has been largely a failure and ought to be reconsidered, especially as the world needs more land to produce food to meet growing demand.
    “Multiple studies show that US biofuel policies have reshaped crop production, displacing food crops and driving up emissions from land conversion, tillage, and fertilizer use,” said the report’s lead author, Haley Leslie-Bole. “Corn-based ethanol, in particular, has contributed to nutrient runoff, degraded water quality and harmed wildlife habitat. As climate pressures grow, increasing irrigation and refining for first-gen biofuels could deepen water scarcity in already drought-prone parts of the Midwest.”
    The conversion of Midwestern agricultural land has been sweeping. Between 2004 and 2024, ethanol production increased by nearly 500 percent. Corn and soybeans are now grown on 92 and 86 million acres of land respectively—and roughly a third of those crops go to produce ethanol. That means about 30 million acres of land that could be used to grow food crops are instead being used to produce ethanol, despite ethanol only accounting for 6 percent of the country’s transportation fuel.
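The acreage arithmetic above can be checked directly. A quick back-of-envelope calculation, using only the figures quoted in the article, confirms the roughly 30-million-acre estimate:

```python
# Back-of-envelope check of the acreage figures cited in the article.
# All inputs come from the text; this is an illustration, not part of the report.
corn_acres = 92e6      # US corn acreage
ethanol_share = 1 / 3  # roughly a third of the corn crop goes to ethanol

ethanol_acres = corn_acres * ethanol_share
print(f"Corn acreage used for ethanol: ~{ethanol_acres / 1e6:.0f} million acres")
```

A third of 92 million acres is about 31 million, consistent with the article's "about 30 million acres."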

    The biofuels industry—which includes refiners, corn and soy growers and the influential agriculture lobby writ large—has long insisted that corn- and soy-based biofuels provide an energy-efficient alternative to fossil-based fuels. Congress and the US Department of Agriculture have agreed.
    The country’s primary biofuels policy, the Renewable Fuel Standard, requires that biofuels provide a greenhouse gas reduction over fossil fuels: The law says that ethanol from new plants must deliver a 20 percent reduction in greenhouse gas emissions compared to gasoline.
    In addition to greenhouse gas reductions, the industry and its allies in Congress have also continued to say that ethanol is a primary mainstay of the rural economy, benefiting communities across the Midwest.
    But a growing body of research—much of which the industry has tried to debunk and deride—suggests that ethanol actually may not provide the benefits that policies require. It may, in fact, produce more greenhouse gases than the fossil fuels it was intended to replace. Recent research says that biofuel refiners also emit significant amounts of carcinogenic and dangerous substances, including hexane and formaldehyde, in greater amounts than petroleum refineries.
    The new report points to research saying that increased production of biofuels from corn and soy could actually raise greenhouse gas emissions, largely from carbon emissions linked to clearing land in other countries to compensate for the use of land in the Midwest.
    On top of that, corn is an especially fertilizer-hungry crop requiring large amounts of nitrogen-based fertilizer, which releases huge amounts of nitrous oxide when it interacts with the soil. American farming is, by far, the largest source of domestic nitrous oxide emissions already—about 50 percent. If biofuel policies lead to expanded production, emissions of this enormously powerful greenhouse gas will likely increase, too.

The new report concludes that not only will the expansion of ethanol increase greenhouse gas emissions, but it has also failed to provide the social and financial benefits to Midwestern communities that lawmakers and the industry say it has. (The report defines the Midwest as Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin.) “The benefits from biofuels remain concentrated in the hands of a few,” Leslie-Bole said. “As subsidies flow, so may the trend of farmland consolidation, increasing inaccessibility of farmland in the Midwest, and locking out emerging or low-resource farmers. This means the benefits of biofuels production are flowing to fewer people, while more are left bearing the costs.”
    New policies being considered in state legislatures and Congress, including additional tax credits and support for biofuel-based aviation fuel, could expand production, potentially causing more land conversion and greenhouse gas emissions, widening the gap between the rural communities and rich agribusinesses at a time when food demand is climbing and, critics say, land should be used to grow food instead.
    President Donald Trump’s tax cut bill, passed by the House and currently being negotiated in the Senate, would not only extend tax credits for biofuels producers, it specifically excludes calculations of emissions from land conversion when determining what qualifies as a low-emission fuel.
    The primary biofuels industry trade groups, including Growth Energy and the Renewable Fuels Association, did not respond to Inside Climate News requests for comment or interviews.
    An employee with the Clean Fuels Alliance America, which represents biodiesel and sustainable aviation fuel producers, not ethanol, said the report vastly overstates the carbon emissions from crop-based fuels by comparing the farmed land to natural landscapes, which no longer exist.
They also noted that the impact of soy-based fuels in 2024 was more than $42 billion, providing over 100,000 jobs.
    “Ten percent of the value of every bushel of soybeans is linked to biomass-based fuel,” they said.

    Georgina Gustin, Inside Climate News

  • UMass and MIT Test Cold Spray 3D Printing to Repair Aging Massachusetts Bridge

Researchers from the US-based University of Massachusetts Amherst (UMass), in collaboration with the Massachusetts Institute of Technology (MIT) Department of Mechanical Engineering, have applied cold spray to repair the deteriorating “Brown Bridge” in Great Barrington, built in 1949. The project marks the first known use of this method on bridge infrastructure and aims to evaluate its effectiveness as a faster, more cost-effective, and less disruptive alternative to conventional repair techniques.
    “Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” said Simos Gerasimidis, associate professor of civil and environmental engineering at the University of Massachusetts Amherst. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone to that,” he added.
The pilot project is also a collaboration with the Massachusetts Department of Transportation (MassDOT), the Massachusetts Technology Collaborative (MassTech), the U.S. Department of Transportation, and the Federal Highway Administration. It was supported by the Massachusetts Manufacturing Innovation Initiative, which provided essential equipment for the demonstration.
Members of the UMass Amherst and MIT Department of Mechanical Engineering research team, led by Simos Gerasimidis (left, standing). Photo via UMass Amherst.
    Tackling America’s Bridge Crisis with Cold Spray Technology
Nearly half of the bridges across the United States are in “fair” condition, while 6.8% are classified as “poor,” according to the 2025 Report Card for America’s Infrastructure. In Massachusetts, about 9% of the state’s 5,295 bridges are considered structurally deficient. The costs of restoring this infrastructure are projected to exceed $190 billion—well beyond current funding levels.
    The cold spray method consists of propelling metal powder particles at high velocity onto the beam’s surface. Successive applications build up additional layers, helping restore its thickness and structural integrity. This method has successfully been used to repair large structures such as submarines, airplanes, and ships, but this marks the first instance of its application to a bridge.
One of cold spray’s key advantages is its ability to be deployed with minimal traffic disruption. “Every time you do repairs on a bridge you have to block traffic, you have to make traffic controls for substantial amounts of time,” explained Gerasimidis. “This will allow us to [apply the technique] on this actual bridge while cars are going [across].”
    To enhance precision, the research team integrated 3D LiDAR scanning technology into the process. Unlike visual inspections, which can be subjective and time-consuming, LiDAR creates high-resolution digital models that pinpoint areas of corrosion. This allows teams to develop targeted repair plans and deposit materials only where needed—reducing waste and potentially extending a bridge’s lifespan.
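To illustrate the targeted-repair idea described above, a thickness map derived from a LiDAR scan can be filtered against a loss threshold so that material is deposited only where corrosion has thinned the section. The data points, threshold, and names below are hypothetical assumptions for illustration, not values from the UMass/MIT project:

```python
# Hypothetical sketch: flag scanned points where section loss exceeds a
# threshold, marking them as candidate zones for cold-spray deposition.
# All numbers here are illustrative, not project data.
nominal_thickness_mm = 10.0
loss_threshold = 0.20  # flag points that have lost more than 20% of section

# (x, y, measured thickness in mm) samples from a scanned beam flange
scan = [
    (0.0, 0.0, 9.8),
    (0.5, 0.0, 7.6),   # corroded
    (1.0, 0.0, 9.9),
    (1.5, 0.0, 6.9),   # corroded
]

repair_zones = [
    (x, y) for x, y, t in scan
    if (nominal_thickness_mm - t) / nominal_thickness_mm > loss_threshold
]
print(repair_zones)  # -> [(0.5, 0.0), (1.5, 0.0)]
```

The same filtering step is what lets a digital scan replace a subjective visual inspection: the repair plan falls out of the data rather than an inspector's judgment.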
    Next steps: Testing Cold-Sprayed Repairs
    The bridge is scheduled for demolition in the coming years. When that happens, researchers will retrieve the repaired sections for further analysis. They plan to assess the durability, corrosion resistance, and mechanical performance of the cold-sprayed steel in real-world conditions, comparing it to results from laboratory tests.
    “This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” said John Hart, Class of 1922 Professor in the Department of Mechanical Engineering at MIT. “I think we’re just at the beginning of a digital transformation of bridge inspection, repair and maintenance, among many other important use cases.”
    3D Printing for Infrastructure Repairs
Beyond cold spray techniques, other innovative 3D printing methods are emerging to address construction repair challenges. For example, researchers at University College London (UCL) have developed an asphalt 3D printer specifically designed to repair road cracks and potholes. “The material properties of 3D printed asphalt are tunable, and combined with the flexibility and efficiency of the printing platform, this technique offers a compelling new design approach to the maintenance of infrastructure,” the UCL team explained.
    Similarly, in 2018, Cintec, a Wales-based international structural engineering firm, contributed to restoring the historic Government building known as the Red House in the Republic of Trinidad and Tobago. This project, managed by Cintec’s North American branch, marked the first use of additive manufacturing within sacrificial structures. It also featured the installation of what are claimed to be the longest reinforcement anchors ever inserted into a structure—measuring an impressive 36.52 meters.
Join our Additive Manufacturing Advantage (AMAA) event on July 10th, where AM leaders from Aerospace, Space, and Defense come together to share mission-critical insights. Online and free to attend. Secure your spot now.
Who won the 2024 3D Printing Industry Awards?
Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.
Featured image shows members of the UMass Amherst and MIT Department of Mechanical Engineering research team, led by Simos Gerasimidis (left, standing). Photo via UMass Amherst.