• New Imaging Technique Makes the Sun Look Like a Swirling Pink Liquid

    A swirling sea of pink, where fluffy tufts float majestically upward, while elsewhere violet plumes rain down from above. This is the Sun as seen in groundbreaking new images — and they're unlike anything you've ever laid eyes on.
    As detailed in a new study published in the journal Nature Astronomy, scientists have leveraged new coronal adaptive optics technology to bypass the blurriness caused by the turbulence of the Earth's atmosphere, an age-old obstacle that has frustrated astronomers' attempts to see features on our home star at a resolution better than 620 miles. Now, they've gotten it down to just under 40 miles — a light-year-sized leap.
    The result is some of the clearest images to date of the fine structures that make up the Sun's formidable corona, the outermost layer of its atmosphere, known for its staggering temperatures and violent, unpredictable outbursts. The authors are optimistic that their blur-bypassing techniques will be a game-changer.
    "These are by far the most detailed observations of this kind, showing features not previously observed, and it's not quite clear what they are," coauthor Vasyl Yurchyshyn, a research professor at the New Jersey Institute of Technology's Center for Solar-Terrestrial Research (CSTR), said in a statement about the work.
    "It is super exciting to build an instrument that shows us the Sun like never before," echoed lead author Dirk Schmidt, an adaptive optics scientist at the US National Solar Observatory.
    Stretching for millions of miles into space, the corona is the staging ground for the Sun's violent outbursts, from solar storms to solar flares to coronal mass ejections. One reason scientists are interested in these phenomena is that they batter our own planet's atmosphere, playing a significant role in the Earth's climate and wreaking havoc on our electronics. Then, at a reach far beyond our limited human purview, is the corona's mighty solar wind, which sweeps across the entire solar system, shielding it from cosmic rays.
    But astronomers are still trying to understand how these solar phenomena occur. One abiding mystery is why the corona can reach temperatures in the millions of degrees Fahrenheit when the Sun's surface, thousands of miles below, is no more than a relatively cool 10,000 degrees. The conundrum even has a name: the coronal heating problem.
    The level of detail captured in the latest images, taken with an adaptive optics system installed on the Goode Solar Telescope at the CSTR, could be transformative in probing these mysteries.
    One type of feature the unprecedented resolution revealed was solar prominences: large, flashy structures that protrude from the Sun's surface, found in twisty shapes like arches or loops. A spectacular video shows a solar prominence swirling like a tortured waterspout as it's whipped around by the Sun's magnetic field.
    Most awe-inspiring of all are the examples of what's known as coronal rain. Appearing like waterfalls suspended in midair, the phenomenon occurs as plasma cools and condenses into huge globs before crashing down to the Sun's surface. These were imaged at a scale smaller than 100 kilometers, or about 62 miles. In solar terms, that's pinpoint accuracy.
    "With coronal adaptive optics now in operation, this marks the beginning of a new era in solar physics, promising many more discoveries in the years and decades to come," said coauthor Philip R. Goode of the CSTR in a statement.
    More on our solar system: Scientists Detect Mysterious Object in Deep Solar System
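    Those figures square with simple optics. As a rough sanity check (an illustration, not a calculation from the study; the 1.6-meter aperture of the Goode Solar Telescope and the 550-nanometer mid-visible wavelength are assumptions not stated in the article), the quoted resolution is about what the telescope's diffraction limit allows once adaptive optics removes the atmospheric blur:

```python
# Back-of-the-envelope check on the quoted resolution figures -- an
# illustration, not a calculation from the study. Assumes the Goode Solar
# Telescope's 1.6 m aperture and a mid-visible wavelength of 550 nm.

AU_M = 1.496e11     # mean Earth-Sun distance, meters
MILE_M = 1609.34    # meters per mile

def diffraction_limit_rad(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: smallest resolvable angle for a circular aperture."""
    return 1.22 * wavelength_m / aperture_m

theta = diffraction_limit_rad(550e-9, 1.6)  # ~4.2e-7 radians
on_sun_m = theta * AU_M                     # small angle -> length at 1 AU
print(f"{on_sun_m / 1000:.0f} km ({on_sun_m / MILE_M:.0f} miles)")
# -> about 63 km (~39 miles), matching the article's "just under 40 miles".
# Uncorrected atmospheric seeing of roughly 1-2 arcseconds, by contrast,
# blurs ground-based solar images to several hundred miles.
```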
  • From the "Department of No" to a "Culture of Yes": A Healthcare CISO's Journey to Enabling Modern Care

    May 30, 2025The Hacker NewsHealthcare / Zero Trust

    Breaking Out of the Security Mosh Pit
    When Jason Elrod, CISO of MultiCare Health System, describes legacy healthcare IT environments, he doesn't mince words: "Healthcare loves to walk backwards into the future. And this is how we got here, because there are a lot of things that we could have prepared for that we didn't, because we were so concentrated on where we were."
    This chaotic approach has characterized healthcare IT for decades. In a sector where lives depend on technology working flawlessly 24/7/365, security teams have traditionally functioned as gatekeepers—the "Department of No"—focused on protection at the expense of innovation and care delivery.
    But as healthcare continues its digital transformation journey, this approach is no longer sustainable. With 14 hospitals, hundreds of urgent care clinics, and nearly 30,000 employees serving millions of patients, MultiCare needed a different path forward – one that didn't sacrifice innovation for safety. That shift began with a mindset change at the top that was driven by years of experience navigating these exact tensions.
    Jason Elrod's View: The Healthcare Security Conundrum
    After 15+ years as a healthcare CISO, Elrod has a unique perspective on the security challenges facing healthcare organizations. According to him, healthcare's specific operational realities create security dilemmas unlike any other industry:

    Always-on operations: "When can you take it down? When can you stop everything and upgrade it?" asks Elrod. Unlike other industries, healthcare operates 24/7/365 with little room for downtime.
    Life-or-death access requirements: "We have to make sure all the information they need is available when they need it, with the minimum amount of friction possible. Because it's me, it's you, it's our communities, it's our loved ones, it's life or death."
    Expanding attack surface: With the shift to telemedicine, remote work, and connected medical devices, the threat landscape has expanded dramatically. "It's like a bowl of spaghetti where each strand needs to be able to talk to one end or the other, but just to the strands it needs to."
    Misaligned incentives: "IT historically has been concentrated on availability and speed and access, ubiquitous access… And security says, 'That's a fantastic Lego car you built. Before you can go outside and play with it, I'm going to stick a bunch more Legos on top of it called security, privacy, and compliance.'"

    It's a recipe for burnout, blame, and breakdowns. But what if security could enable care instead of obstructing it?
    Watch how MultiCare turned that possibility into practice in the Elisity Microsegmentation Platform case study with Jason Elrod, CISO, MultiCare Health System.

    Identity: The Key to Modern Healthcare Security
    The breakthrough for MultiCare came with the implementation of identity-based microsegmentation through Elisity.
    "The biggest attack surface is the identity of every individual," notes Elrod. "Why are the attacks always on identity? Because in healthcare, we must make sure all the information is available when they need it, with the minimum amount of friction possible."
    Traditional network segmentation approaches relied on complex VLANs, firewalls, and endpoint agents. The result? "A Byzantine spaghetti mess" that became increasingly difficult to manage and update.
    Elisity's approach changed this paradigm by focusing on identity rather than network location (a simplified sketch of the idea follows the list below):

    Dynamic security policies that follow users, workloads, and devices wherever they appear on the network
    Granular access controls that create security perimeters around individual assets
    Policy enforcement points that leverage existing infrastructure to implement microsegmentation without requiring new hardware, agents, or complex network reconfigurations
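
    To make that concrete, here is a deliberately simplified, hypothetical sketch of identity-based policy matching. The class names, roles, and rule format are invented for this example and are not Elisity's actual data model or API; the point it demonstrates is that rules key off what a device is rather than where it sits:

```python
# Hypothetical sketch of identity-based policy matching -- invented for
# illustration, not Elisity's actual data model or API. Rules key off what
# a device *is* (its role), not where it sits (IP address or VLAN), so the
# same policy follows the device anywhere it appears on the network.

from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    device_id: str
    role: str  # e.g. "infusion-pump", "nurse-workstation" (hypothetical roles)

@dataclass(frozen=True)
class Rule:
    src_role: str
    dst_role: str
    port: int

# Allow-list: anything not matched below is denied by default.
POLICY = [
    Rule("nurse-workstation", "ehr-app", 443),    # clinicians reach the EHR over HTTPS
    Rule("infusion-pump", "pump-manager", 8443),  # devices reach only their manager
]

def evaluate(src: Identity, dst: Identity, port: int) -> bool:
    """Default-deny evaluation: traffic passes only if an identity rule allows it."""
    return any(
        (r.src_role, r.dst_role, r.port) == (src.role, dst.role, port)
        for r in POLICY
    )

pump = Identity("pump-017", "infusion-pump")
ehr = Identity("ehr-01", "ehr-app")
print(evaluate(pump, ehr, 443))  # False: the pump has no path to the EHR,
                                 # no matter which subnet or VLAN it lands on
```

    Because the rule matches identity attributes rather than addresses, nothing has to be re-plumbed when a device moves: there is no VLAN to reassign and no firewall rule tied to an IP range to rewrite.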

    From Skepticism to Transformation
    When Elrod first introduced Elisity to his team, they responded with healthy skepticism. "They're like, 'Did you hit your head? Are you sure you read what you were saying? I thought you stopped drinking,'" Elrod recalls.
    The technical teams were doubtful that such a microsegmentation solution could work with their existing infrastructure. "They said, 'That doesn't sound like something that can be done,'" shares Elrod.
    But seeing was believing. "When you see people who are deeply technical, people who just know their craft really well, and they see something and go 'Wow'… it shakes the pillars of their opinions about what can be done," explains Elrod.
    The Elisity solution delivered on its promises:

    Rapid implementation without disruptive network changes
    Real-time automated or manual policy adjustments that previously took weeks to implement
    Comprehensive visibility across previously siloed environments
    Enhanced security posture without compromising availability

    ...all without forcing a tradeoff between protection and performance.
    But what surprised Elrod most wasn't just what the technology did, but how it changed the people using it.
    Breaking Down Walls Between Teams
    Perhaps the most unexpected benefit was how the solution transformed relationships between teams.
    "There's been a friction point. Put this control and constraint around the network. Who's the first person to call? They're going to call IT. 'I can't do this thing.' And I'm saying, 'Well, you can't open everything, because everybody can't have everything. Because the bad guys will have everything then,'" Elrod explains.
    Identity-based microsegmentation changed this dynamic:
    "It changed from 'How do I get around you?' and 'How do you get around me?' to cooperation. Because now it's like, 'Oh, well, let's make that change together.' It shifted culturally, and this was not something I expected… We really are on the same team. This is a solution that works for all of us, makes all of our jobs better, Security and IT. It is a force multiplier across the organization," says Elrod.
    With Elisity, security and IT teams now share incentives rather than competing priorities. "The same thing that allows me to make connectivity work between this area and here in a frictionless fashion is also the same exact thing that provides the rationalized security around it. Same tool, same dashboard, same team," Elrod notes.
    Enabling a Culture of Yes
    For healthcare providers, the impact is profound. "If they don't have to worry about access, don't have to worry about the controls, they can take the cognitive load of thinking and worrying about the compliance factors of it, the security, the privacy, the technology underlying the table that they're working on," says Elrod.
    This shift enables a fundamental change in how security interacts with clinical staff:

    Speed of delivery: "We can do that at the speed of need as opposed to the speed of bureaucracy, the speed of technology, the speed of legacy," explains Elrod.
    Granular control: "How would you like your own segment on the network, wherever you may roam? I can base it on your identity, wherever you're at," Elrod shares.
    Enhanced trust: "Being able to instill that confidence that, 'Hey, it's secure, it's stable, it's scalable, it's functional, we can support it. And we can move at the pace that you want to move at.'"

    Breaking Down Silos: The Business Imperative of Security-IT Integration
    The traditional separation between security and IT operations teams is rapidly becoming obsolete as organizations recognize the strategic advantages of integration. Recent research demonstrates compelling business benefits for enterprises that successfully bridge this divide, particularly for those in manufacturing, industrial, and healthcare sectors.
    According to Skybox Security (2025), 76% of organizations believe miscommunication between network and security teams has negatively impacted their security posture. This disconnect creates tangible security risks and operational inefficiencies. Conversely, organizations with unified security and IT operations reported 30% fewer significant security incidents than those with siloed teams.
    For healthcare organizations, the stakes are even higher. Among healthcare institutions that experienced ransomware attacks, those with siloed security and IT operations reported a 28% increase in patient mortality rates in 2024, up from 23% in 2023 (Ponemon Institute & Proofpoint, 2024). This stark reality underscores that cybersecurity integration isn't just an operational consideration—it's a patient safety imperative.
    The financial case for integration is equally compelling. A Forrester Total Economic Impact study on ServiceNow Security Operations solutions demonstrated a 238% ROI and $6.2 million in present value benefits, with a six-month payback period when integrating security and IT operations (Forrester/ServiceNow, 2024).
    Forward-thinking organizations are adopting sophisticated integration models like Cyber Fusion Centers. Gartner research confirms these represent a significant advancement over traditional security operations, predicting that by 2028, 20% of large enterprises will shift to cyber-fraud fusion teams to combat internal and external adversaries, up from less than 5% in 2023.
    For enterprise leaders, the message is clear: breaking down operational silos between security and IT teams isn't just good practice—it's essential for comprehensive protection, operational efficiency, and competitive advantage in today's threat landscape. Few understand that better than Elrod, who's spent decades trying to bridge this gap both technologically and culturally.
    The Bridge to Modern Healthcare
    For Elrod, identity-based microsegmentation represents more than just a technology solution—it's a bridge between where healthcare has been and where it needs to go.
    "Technology in the past wasn't bought because it was crappy… They were great. Good intention. They did what they needed to do at the time. But there's a lot of temporal distance between now and when that made sense," he explains.
    Elisity helps MultiCare "build that bridge from where we have been to where we need to go… It's a ladder out of the pit. This is great. Let's stop throwing things in there. Let's actually do things in a rational fashion," says Elrod.
    Looking Ahead
    While no single solution can address all of healthcare's security challenges, identity-based microsegmentation is "one of the bricks on the yellow brick road to making healthcare security and technology the culture of Yes," according to Elrod.
    As healthcare organizations continue to balance security requirements with the need for frictionless care delivery, solutions that align these competing priorities will become increasingly essential.
    By implementing identity-based microsegmentation, MultiCare has transformed security from a barrier to an enabler of modern healthcare—proving that with the right approach, it's possible to create a culture where "yes" is the default response without compromising security or compliance.
    Ready to escape your own security "mosh pit" and build a bridge to modern healthcare? Download Elisity's Microsegmentation Buyer's Guide 2025. This resource equips healthcare security leaders with evaluation criteria, implementation strategies, and ROI frameworks that have helped organizations like MultiCare transform from the "Department of No" to a "Culture of Yes." Begin your journey toward identity-based security today. To learn more about Elisity and how we help transform healthcare organizations like MultiCare, visit our website here.

  • ‘A Minecraft Movie’: Wētā FX Helps Adapt an Iconic Game One Block at a Time

    Adapting the iconic, block-based design aesthetic of Mojang’s beloved Minecraft videogame into the hit feature film comedy adventure A Minecraft Movie posed an enormous number of hurdles for director Jared Hess and Oscar-winning Production VFX Supervisor Dan Lemmon. Tasked with helping translate the pixelated world into something cinematically engaging, while remaining true to its visual DNA, was Wētā FX, which delivered 450 VFX shots on the film. Two of its key leads on the show were VFX Supervisor Sheldon Stopsack and Animation Supervisor Kevin Estey.
    But the shot count merely scratches the surface of the extensive work the studio performed. Wētā led the design and creation of The Overworld: 64 unique terrains spanning deserts, lush forests, oceans, and mountain ranges, all combined into one continuous environment. Those assets were also shared with Digital Domain for its work on the third-act battle. Wētā also handled extensive work on the lava-filled hellscape of The Nether, using Unreal Engine for early representations in previs, scene scouting, and on set during principal photography, before refining the environment during post-production. They also dressed The Nether with lava, fire, and torches, along with atmospherics and particulate like smoke, ash, and embers.

    But wait… there’s more!
    The studio’s Art Department, working closely with Hess, co-created the look and feel of all digital characters in the film. For Malgosha’s henchmen, the Piglins, Wētā designed and created 12 different variants, all with individual characteristics and personalities. They also designed sheep, bees, pandas, zombies, skeletons, and lovable wolf Dennis. Many of these characters were provided to other vendors for their work on the film.
    Needless to say, the studio truly became a “Master Builder” on the show.

    The film is based on the hugely popular game Minecraft, first released by Sweden’s Mojang Studios in 2011 and purchased by Microsoft for $2.5 billion in 2014, which immerses players in a low-res, pixelated “sandbox” simulation where they can use blocks to build entire worlds. 
    In a far-ranging interview, Stopsack and Estey shared with AWN a peek into their creative process, from early design exploration to creation of an intricate practical cloak for Malgosha and the use of Unreal Engine for previs, postvis, and real-time onset visualization.
    Dan Sarto: The film is filled with distinct settings and characters sporting various “block” styled features. Can you share some of the work you did on the environments, character design, and character animation?
    Sheldon Stopsack: There's so much to talk about and, truth be told, if you were to touch on everything, we would probably need to spend the whole day together. 
    Kevin Estey: Sheldon and I realized that when we talk about the film, either amongst ourselves or with someone else, we could just keep going, there are so many stories to tell.
    DS: Well, start with The Overworld and The Nether. How did the design process begin? What did you have to work with?
    SS: Visual effects is a tricky business, you know. It's always difficult. Always challenging. However, Minecraft stood out to us as not your usual quote unquote standard visual effects project, even though as you know, there is no standard visual effects project because they're all somehow different. They all come with their own creative ideas, inspirations, and challenges. But Minecraft, right from the get-go, was different, simply by the fact that when you first consider the idea of making such a live-action movie, you instantly ask yourself, “How do we make this work? How do we combine these two inherently very, very different but unique worlds?” That was everyone’s number one question. How do we land this? Where do we land this? And I don't think that any of us really had an answer, including our clients, Dan Lemmon and Jared Hess. Everyone was really open to this journey. That's compelling for us, to get out of our comfort zone. It makes you nervous because there are no real obvious answers.
    KE: Early on, we seemed to thrive off these kinds of scary creative challenges. There were lots of question marks. We had many moments when we were trying to figure out character designs. We had a template from the game, but it was an incredibly vague, low-resolution template. And there were so many ways that we could go. But that design discovery throughout the project was really satisfying. 

    DS: Game adaptations are never simple. There usually isn’t much in the way of story. But with Minecraft, from a visual standpoint, how did you translate low-res, block-styled characters into something entertaining that could sustain a 100-minute feature film?
    SS: Everything was a question mark. Using the lava that you see in The Nether as one example, we had beautiful concept art for all our environments, The Overworld and The Nether, but those concepts only really took you this far. They didn’t represent the block shapes or give you a clear answer of like how realistic some of those materials, shapes and structures would be. How organic would we go? All of this needed to be explored. For the lava, we had stylized concept pieces, with block shaped viscosity as it flowed down. But we spent months with our effects team, and Dan and Jared, just riffing on ideas. We came full circle, with the lava ending up being more realistic, a naturally viscous liquid based on real physics. And the same goes with the waterfall that you see in the Overworld. 
    The question is, how far do we take things into the true Minecraft representation of things? How much do we scale back a little bit and ground ourselves in reality, with effects we’re quite comfortable producing as a company? There's always a tradeoff to find that balance of how best to combine what’s been filmed, the practical sets and live-action performances, with effects. Where’s the sweet spot? What's the level of abstraction? What's honest to the game? As much as some call Minecraft a simple game, it isn't simple, right? It's incredibly complex. It's got a set of rules and logic to the world building process within the game that we had to learn, adapt, and honor in many ways.
    When our misfits first arrive and we have these big vistas and establishing shots, when you really look at it, you recognize a lot of the things that we tried to adapt from the game. There are different biomes, like the Badlands, which is very sandstone-y; there's the Woodlands, which is a lush environment with cherry blossom trees; you’ve got the snow biome with big mountains in the background. Our intent was to honor the game.
    KE: I took a big cue from a lot of the early designs, and particularly the approach that Jared liked for the characters and the design in general, which was maintaining the stylized, blocky aesthetic, but covering them in realistic flesh, fur, things that were going to make them appear as real as possible despite the absolutely unreal designs of their bodies. And so essentially, it was a squared skeleton… squarish bones with flesh and realistic fur laid over top. We tried various things, all extremely stylized. The Creepers are a good example. We tried all kinds of ways for them to explode. Sheldon found a great reference for a cat coughing up a hairball. He was nice enough to censor the worst part of it, but those undulations in the chest and ribcage… Jared spoke of the Creepers being basically tragic characters that only wanted to be loved, to just be close to you. But sadly, whenever they did, they’d explode. So, we experimented with a lot of different motions of how they’d explode.

    DS: Talk about the process of determining how these characters would move. None seem to have remotely realistic proportions in their limbs, bodies, or head size.
    KE: There were a couple things that Jared always seemed to be chasing. One was just something that would make him laugh. Of course, it had to sit within the bounds of how a zombie might move, or a skeleton might move, as we were interpreting the game. But the main thing was just, was it fun and funny? I still remember one of the earliest gags they came up with in mocap sessions, even before I even joined the show, was how the zombies get up after they fall over. It was sort of like a tripod, where its face and feet were planted and its butt shoots up in the air.
    After a lot of experimentation, we came up with basic personality types for each character. There were 12 different types of Piglins. The zombies were essentially like you're coming home from the pub after a few too many pints and you're just trying to get in the door, but you can't find your keys. Loose, slightly inebriated movement. The best movement we found for the skeletons was essentially like an old man with rigid limbs and lack of ligaments that was chasing kids off his lawn. And so, we created this kind of bible of performance types that really helped guide performers on the mocap stage and animators later on.
    SS: A lot of our exploration didn’t stick. But Jared was the expert in all of this. He always came up with some quirky last-minute idea. 
    KE: My favorite from Jared came in the middle of one mocap shoot. He walked up to me and said he had this stupid idea. I said OK, go on. He said, what if Malgosha had these two little pigs next to her, like Catholic altar boys, swinging incense. Can we do that? I talked to our stage manager, and we quickly put together a temporary prop for the incense burners. And we got two performers who just stood there. What are they going to do? Jared said, “Nothing. Just stand there and swing. I think it would look funny.” So, that’s what we did. We dubbed them the Priesty Boys. And they are there throughout the film. That was the amazing thing about Jared. He was always like, let's just try it, see if it works. Otherwise ditch it.

    DS: Tell me about your work on Malgosha. And I also want to discuss your use of Unreal Engine and the previs and postvis work. 
    SS: For Malgosha as a character, our art department did a phenomenal job finding the character design at the concept phase. But it was a collective effort. So many contributors were involved in her making. And I’m not just talking about the digital artists here on our side. It was a joint venture of different people having different explorations and experiments. It started off with the concept work as a foundation, which we mocked up with 3D sketches before building a model. But with Malgosha, we also had the costume department on the production side building this elaborate cloak. Remember, that cloak makes up 80, 85% of her appearance. It’s almost like a character in itself, the way we utilized it. And the costume department built this beautiful, elaborate, incredibly intricate, practical version of it that we intended to use on set for the performer to wear. It ended up being impractical because it was too heavy. But it was beautiful. So, while we didn’t really use it on set, it gave us something physical to incorporate into our digital version.
    KE: Alan Henry is the motion performer who portrayed her on set and on the mocap stage. I’ve known him for close to 15 years. I started working with him on The Hobbit films. He was a stunt performer who eventually rolled into doing motion capture with us on The Hobbit. He’s an incredible actor and absolutely hilarious and can adapt to any sort of situation. He’s so improvisational. He came up with an approach to Malgosha very quickly: adding a limp so that she felt decrepit, leaning on the staff, and treating her other arm as kind of a gimp arm that she would point and gesture with.
    Even though she’s a blocky character, her anatomy is very much that of a biped, with rounder limbs than the other Piglins. She’s got hooves and is somewhat squarish, and her bulkier mass in the middle was easier to manipulate and move around. Because she would have to battle Steve in the end, she had to have a level of agility that even some of the Piglins didn’t have.

    DS: Did Unreal Engine come into play with her? 
    SS: Unreal was used all the way through the project. Early on, Dan Lemmon and his team set up their own virtual art department to build representations of the Overworld and the Nether within Unreal. We and Sony Imageworks provided recreations of these environments that were then used within Unreal to previsualize what was happening on set during principal photography. And that’s where our mocap and on-set teams came into play. Our effects team provided what we called the Nudge Cam. It was a system for real-time tracking using a stereo pair of Basler computer vision cameras mounted onto the sides of the principal camera. We provided the live tracking that was then composited in real time with the Unreal Engine content that all the vendors had provided. It was a great way of utilizing Unreal to give the camera operators, the DOP, even Jared, a good sense of what we would actually shoot. It gave everyone a little bit of context for the look and feel of what you could actually expect from these scenes.
    Because we started this journey with Unreal having on-set use in mind, we internally decided, look, let’s take this further. Let’s take this into post-production as well. What would it take to utilize Unreal for shot creation? It was used exclusively on the Nether environment. I don’t want to say we used it for matte painting replacement. We used it more for, say, let’s build this extended environment in Unreal. Not only use it as a render engine with reasonably fast turnaround but also use it for what it’s good at: authoring things, quickly changing things, moving columns around, manipulating things, dressing them, lighting them, and rendering them. It became sort of a tool that we used in place of a traditional matte painting for the extended environments.
    KE: Another thing worth mentioning is that we were able to utilize it on our mocap stage as well during the two-week shoot with Jared and crew. When we shoot on the mocap stage, we get a very simple sort of gray-shaded diagnostic grid. You have your single-color characters that sometimes are textured, but they’re fairly simple, without any context of environment. Our special projects team was able to port what we usually see in Giant, the software we use on the mocap stage, into Unreal, which gave us these beautifully lit environments with interactive fire and atmosphere. And Jared and the team could see their movie for the first time in a rough, but still very beautiful, state. That was invaluable.
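    As a rough illustration of the kind of real-time loop Stopsack describes, here is a minimal sketch in Python: solve the principal camera’s pose from reference points tracked by a witness camera, then hand the pose off to an engine-side listener that drives the virtual camera for the live composite. The marker layout, calibration values, and UDP address below are hypothetical stand-ins, not Wētā’s actual Nudge Cam code.

    import json
    import socket

    import cv2
    import numpy as np

    # Known 3D positions (meters) of reference markers on the set, and
    # their tracked 2D pixel positions in one of the witness cameras.
    object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float32)
    image_pts = np.array([[320, 240], [420, 242], [418, 338], [322, 336]], dtype=np.float32)

    # Camera intrinsics from a prior calibration (hypothetical values).
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # assume lens distortion is already corrected

    # Recover the camera pose relative to the markers.
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    if ok:
        R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
        pose = {"rotation": R.tolist(), "translation": tvec.ravel().tolist()}
        # Fire-and-forget handoff to a (hypothetical) engine-side listener.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(json.dumps(pose).encode("utf-8"), ("127.0.0.1", 9000))

    In production such a loop would run per frame from the synchronized stereo pair, but the shape of the handoff, tracked pose in and rendered engine content out, is the same.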

    DS: If you had to key on anything, what would you say were the biggest challenges for your teams on the film? You're laughing. I can hear you thinking, “Do we have an hour?”
    KE: Where do you begin? 
    SS: Exactly. It's so hard to really single one out. And I struggle with it every time I've been asked that question.
    KE: I’ll start. I've got a very simple practical answer and then a larger one, something that was new to us, kind of similar to what we were just talking about. The simple practical one is the Piglins’ square feet with no ankles. It was very tough to make them walk realistically. Think of the leg of a chair. How do you make that roll and bank and bend when there is no joint? There are a lot of Piglins walking on surfaces, and it was a very difficult conundrum to solve. It took a lot of hard work from our motion edit team and our animation team to get those things walking realistically. You know, it’s doing that simple thing that you don't usually pay attention to. So that was one reasonably big challenge that is often literally buried in the shadows. The bigger one was something that was new to me. We often do a lot of our previs and postvis in-house and then finish the shots. Just because of circumstances and capacity, we did the postvis for the entire final battle, but we ended up sharing the sequence with Digital Domain, who did an amazing job completing some of the battlefield material we had done postvis on. For me personally, I'd never experienced not finishing what I started. But it was also really rewarding to see how well the work we had put in was honored by DD when they took it over.
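    To picture the chair-leg problem Estey describes, here is a toy 2D sketch (my own illustration, not Wētā’s rig): a jointless square foot can still “roll” if you rotate the whole block about its leading bottom edge, the way a chair leg tips forward onto one edge.

    import numpy as np

    def roll_foot(corners: np.ndarray, angle_rad: float) -> np.ndarray:
        """Tip the foot forward by rotating every corner about the
        front-bottom corner, which acts as the pivot edge."""
        # Front-most corner first; break ties by taking the lowest one.
        pivot = corners[np.lexsort((corners[:, 1], -corners[:, 0]))][0]
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        clockwise = np.array([[c, s], [-s, c]])  # forward tip, in this frame
        return (corners - pivot) @ clockwise.T + pivot

    # Unit-length foot: heel at x=0, toe at x=1, resting on the ground (y=0).
    foot = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 0.4], [0.0, 0.4]])
    print(roll_foot(foot, np.radians(20)))  # heel lifts; toe edge stays planted

    Sliding that pivot from the heel edge to the toe edge over a stride fakes the roll that real ankle and toe joints would normally provide, roughly the kind of cheat motion editors lean on for jointless feet.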
    SS: I think the biggest challenge, and the biggest achievement that I'm most proud of, is really ending up with something that was well received by the wider audience: creating these two worlds, this sort of abstract adaptation of the Minecraft game, and combining it with live-action. That was the achievement for me. That was the biggest challenge. We were all nervous from day one. And we continued to be nervous up until the day the movie came out. None of us really knew how it ultimately would be received. The fact that it came together and was so well received is a testament to everyone doing a fantastic job. And that's what I'm incredibly proud of.

    Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.
  • What AI’s impact on individuals means for the health workforce and industry

    Transcript    
    PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.”      
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.
    The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak.
    You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues.
    So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.  
    To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar.
    Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence.
    Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics.
    Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.
    Here is my interview with Ethan Mollick:
    LEE: Ethan, welcome.
    ETHAN MOLLICK: So happy to be here, thank you.
    LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. So to get started, how and why did it happen that you’ve become one of the leading experts on AI?
    MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was doing my PhD at MIT, I worked with Marvin Minsky and the MIT Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it.
    And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially on education and AI and performance, I became sort of a go-to person in the field.
    And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you had a couple of months’ head start for GPT-4, right. Like, that’s all we have at this point, is a few months’ head start. So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.
    LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?
    MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that tell us about the state of AI right now.
    One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things.
    And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy in diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever.
    So one part of this is that the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too: they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay. So we’re kind of all in the same boat here, which is a very unusual space for a new technology.
    LEE: And I, you know, explained that you’re at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty?
    MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and on education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is a general-purpose technology; it’s going to touch everything. The focus of this is medicine, but Microsoft does far more than medicine, right. There’s transformation happening in literally every field, in every country. This is a widespread effect.
    So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated.
    LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.
    MOLLICK: Yeah. Those are great questions. So first of all, when I was at the Media Lab, that was before the current boom in, sort of, you know, even the old-school machine learning kind of space. So there were a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system.
    There was a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing?
    The fact that it was just like brute force over the corpus of all human knowledge turns out to be a little bit of like a, you know, it’s a miracle and a little bit of a disappointment in some ways compared to how elaborate some of this was. So, you know, I think that was sort of my first encounter in sort of the intellectual way.
    The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021, that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.
    LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.
    MOLLICK: Yes.
    LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?
    MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course computers can give you answers to questions and write fun things. So there’s a difference in moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right.
    I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?”
    So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4 level intelligence from, you know, when it was released has dropped 99.97% at this point, right.
    LEE: Yes. Mm-hmm.
    MOLLICK: I mean, I could run a GPT-4 class system basically on my phone. Microsoft’s releasing things that can almost run on like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either.
    LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time. You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?
    MOLLICK: I mean, you know, it’s not even dread as much as like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered.
    You know, as academics, we’re a little used to dead ends, right, and like, you know, sometimes getting lapped. But the idea that entire fields are hitting that wall… Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete.
What do you do to change that? You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, our research angles that matters, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one.
Now, with one team member, we’ve built a system that can create teaching simulations on demand just by talking to it. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet.
    LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this. 
    MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where like, ughhhh, but then there’s joy and basically like also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills.
Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of the stress of our job comes from the fact that we suck at some of it. And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. And we are at kind of a unique moment where whatever you’re best at, you’re still better than AI. And I think it’s an ongoing question about how long that lasts. But for right now, like, you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely.
    But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety.
    LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or request for prior authorization for some reimbursement to an insurance company.
    And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?
MOLLICK: So I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there’s lots of performance gains to be had, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about the systems that fit along with this, right.
    So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there’s incredible amounts of individual innovation in productivity and performance improvements in this field, like very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains.
And one of my big concerns is seeing that happen. We’re seeing the same kind of thing in nonmedical problems, which is, you know, we’ve got research showing 20 to 40% performance improvements, like, not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result.
    LEE: You know, where are those productivity gains going, then, when you get to the organizational level?
    MOLLICK: Well, they’re dying for a few reasons. One is, there’s a tendency for individual contributors to underestimate the power of management, right.
Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the advantage that US firms have over firms in other countries has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal.
When things get stuck at the individual level, right, you can’t start bringing them up into how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen.
    So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons.
    And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves.
So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and often, especially in risk-averse organizations or organizations where there are lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change.
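To check the arithmetic behind the playbook remark above (firing 40% of doctors after a 60% improvement), here is a rough sketch. It assumes, as an idealization, that the improvement applies uniformly to every doctor:

```python
# Rough arithmetic behind the "fire 40% of the other doctors" remark.
# Idealized assumption: every doctor gets the same 60% productivity boost.
improvement = 0.60

# Fraction of the original headcount needed to cover the same workload.
headcount_needed = 1 / (1 + improvement)
print(f"Headcount needed: {headcount_needed:.1%}")        # 62.5%
print(f"Cuttable headcount: {1 - headcount_needed:.1%}")  # 37.5%, close to the 40% quoted
```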
    LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?
MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to expose them to uninformed risk. But innovation involves risk to how organizations operate. It involves change. So I think part of this is embracing the idea that R&D has to happen in organizations again.
What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies: Salesforce tells you how to organize your sales team; Workday tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field.
    So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab.
So the crowd is the idea of how to empower clinicians and administrators and support networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about how to get AI to work, not just in direct patient care, right. But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill?
    And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.
    LEE: So let’s shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones.
    And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ?
    MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it’s cheap, right? You could overrule it or whatever you want, but like not asking seems foolish.
I think the two places where there’s a burning almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space.
    But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say when should you use AI and when not. Are there blind spots? What are those things?
And I worry that, like, to me, that would be the crash project I’d be invoking because I’m doing the same thing in education, which is this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than not getting access, you know, to the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to.
    So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that.
    LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?
MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t, like, an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting in, you know, any one case is probably not that high.
    A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but like magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing.
    So I worry when people say teach AI skills. No one’s been able to articulate to me as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right.
    I mean, there’s value in learning a little bit how the models work. There’s a value in working with these systems. A lot of it’s just hands on keyboard kind of work. But, like, we don’t have an easy slam dunk “this is what you learn in the world of AI” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboards, getting them to … there’s like four things I could teach you about AI, and two of them are already starting to disappear.
    But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s like, that’s it as far as the research telling you what to do, and the rest is building intuition.
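Those four principles are concrete enough to sketch in code. Here is a minimal illustration; the build_prompt helper and the example text are invented for this sketch, not any particular product’s API:

```python
# A minimal sketch of the four prompting principles above:
# 1. be direct, 2. provide context, 3. give step-by-step directions,
# 4. show good and bad examples. All names and text here are hypothetical.

def build_prompt(instruction: str, context: str,
                 steps: list[str], examples: list[tuple[str, str]]) -> str:
    """Assemble a prompt from the four elements."""
    parts = [instruction.strip()]                 # 1. Be direct.
    parts.append(f"Context:\n{context.strip()}")  # 2. Provide context.
    if steps:                                     # 3. Step-by-step (increasingly optional).
        numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps))
        parts.append(f"Work through these steps:\n{numbered}")
    for label, text in examples:                  # 4. Good and bad examples.
        parts.append(f"{label} example:\n{text}")
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="Summarize this discharge note for the patient in plain language.",
    context="Act as a patient educator. The reader has no medical training.",
    steps=["List the diagnoses", "Explain each medication", "Flag follow-ups"],
    examples=[("Good", "Your heart scan looked normal."),
              ("Bad", "Echocardiogram demonstrated no LV dysfunction.")],
)
print(prompt)
```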
LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.”
MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize.
LEE: It’s good to chuckle about that, but actually, I can’t think of a better book, like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading?
    MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems.
    So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is.
    But using it. Voice modes help a lot. In terms of readings, I mean, I think that there is a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview …
    LEE: Yeah, that’s a great one.
    MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I thinkKarpathyhas some really nice videos of use that I would recommend.
    Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But like all we can offer are hints in some ways. Like there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works.
    LEE: Yeah.
    MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right.
LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing, for me at least, is how the accrediting bodies, which are extremely important in US medical schools, should think about what’s necessary here.
Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME accrediting body, what’s the one thing you would want them to really internalize?
MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, where you say, “Let’s see how this works.” Because the things that make medical technologies slow to roll out, like unclear results and limited, you know, expensive use cases, don’t apply here. Usually one or two, you know, advanced medical facilities get access to proton beams or something else at multibillion-dollar cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine.
I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods; like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast.
    So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right.
We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other forms of AI, and your AI working group that is thinking about how to solve this problem is not the right people here.
    LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer.
    MOLLICK: Yes. Yes.
    LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall.
    But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence really just even in the title of the book tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea?
MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person even though it isn’t, right.
There’s a lot of warnings and caveats to it, but if you start from person, smart person you’re talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right.
LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens?
    MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who like may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people.
So, like, it’s hard to imagine medicine being so upended in five to 10 years that even if AI were better than doctors at every single thing doctors do, we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine.
But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference. We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, the best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point.
    Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not.
    Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get?
    LEE: Yeah.
MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, the future is easier to predict, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help for everything.
Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way, or get much worse if we handle it badly. Diagnostic accuracy will increase, right.
    And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen and how does that affect us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it.
    LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining.
    MOLLICK: Thank you.  
    I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work.
    One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does.
    In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI.
The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI.
    Here’s now my interview with Azeem Azhar:
    LEE: Azeem, welcome.
    AZEEM AZHAR: Peter, thank you so much for having me. 
    LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before.
    And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day?
    AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip …
    LEE: Oh wow.
    AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started.
    And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large.
    LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through?
    AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed.
Now, I’d been aware of GPT-3 and GPT-2, which I played around with, and with BERT, the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language that really made me think we’ve crossed into a new domain. We’ve gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th.
    And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely pass some kind of threshold.
    LEE: And who’s the we that you were experimenting with?
AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, or they walk into our virtual team room, and we try to solve problems.
    LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance, manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and which ways, you know, efficiencies are found.  
And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and look at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine?
AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, it’s hard to imagine a sector that is more broad than that.
So I think we can start to break it down, and, you know, where we’re seeing things start with generative AI will be the, sort of, softest entry point, which is medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros I noticed, right? They’re on the tablet computers, and they’re scribing away.
    And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload.
    And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help.
    So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced.
    So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized.
    LEE: Yeah.
    AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.
    LEE: Yup.
    AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to.
    And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and patient pathway. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on.
    It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector.
And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout.
So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.
    LEE: I love how you break that down. And I want to press on a couple of things.
You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but it also feels different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?
    AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example.
    In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different.
I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say, well, no one ever does this, and I said, well, you know, the thing is that I kind of just want to get this thing to go away.
    LEE: Yeah.
    AZHAR: And I think that that’s why medicine is and healthcare is so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week.
    LEE: Right. Yeah.
    AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.
    LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution.
    Because like healthcare, as a consumer, I don’t have choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons.
    And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work?
    AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice.
I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors.
    I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner.
    LEE: Yeah.
    AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.
    LEE: Right.
AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few differences: the ability the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.
    LEE: Yeah, one last question while we’re still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis.
And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity since obviously costs are, in most of the developed world, a huge, huge problem?
AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around the hospital faster and faster than ever before.
    We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right?
    LEE: Yeah, yeah.
AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed, by orders of magnitude, the productivity of people who made cars, starting with Henry Ford, because he was able to reorganize his factories around the electrical delivery of power and therefore have the moving assembly line, which 10xed the productivity of that system.
    So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later …
    LEE: Right.
    AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for.
    And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system …
    LEE: Yup.
    AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that.
So I can see a reorganization is possible. Of course, entrenched interests and the economic flows … and there are many entrenched interests, particularly in the US, between the health systems and the, you know, professional bodies … might slow things down. But I think a reimagining is possible.
And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya was another. And in the latter, they were a cardiac care unit where you couldn’t get enough heart surgeons.
    LEE: Yeah, yep.
    AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own.
LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, someone who is certified in some way, licensed to do it. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop?
AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions, like what time I should leave for the airport, to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold.
If I come back to my example of prescribing Ventolin: it’s really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 years or 12 years of medical training, or why that couldn’t be prescribed by an algorithm or an AI system.
    LEE: Right. Yep. Yep.
    AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time.
    LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you.
    AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician.
In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’ve just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing. Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart.
I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI,” has obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that.
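Azhar’s printed flowchart is, in effect, a small decision procedure, which makes it easy to see what he means by an “algorithm.” Here is a minimal sketch of that idea; every question and threshold below is invented for illustration, since real rescue-pack protocols are written by clinicians:

```python
# Hypothetical sketch of a printed "rescue pack" flowchart expressed as code.
# All questions and thresholds are invented for illustration only.

def rescue_pack_protocol(peak_flow_pct: float, worsening_days: int,
                         discolored_sputum: bool) -> str:
    """Walk the flowchart: answer the questions, arrive at an action."""
    if peak_flow_pct < 50:
        return "Severe: seek urgent care now."
    if peak_flow_pct < 75 and worsening_days >= 2:
        action = "Start the prednisolone course from the rescue pack."
        if discolored_sputum:  # hypothetical proxy for a respiratory infection
            action += " Add the antibiotic and notify your GP."
        return action
    return "Continue the reliever inhaler and monitor."

print(rescue_pack_protocol(peak_flow_pct=68, worsening_days=3,
                           discolored_sputum=False))
```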
LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time.
AZHAR: Yeah, yeah. Thank god for Clippy. Yes.
    LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary on that.
    And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway.
    AZHAR: Right.
    LEE: And so now two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like?
    AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs, and there are the classic set of processes you go through.
    You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning so that every element of the system can learn from this experience.
    So finding that space where there can be bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly.
    So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what is it to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots.
    LEE: Yes.
    AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval.
I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s not just hearing what I’m saying, he’s actually pulling up the real data going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be.
LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. And because one of the things that consumers want to know always is, you know, what’s the truth?
    AZHAR: Right.
    LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow.
    AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week.
    And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have like a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician.
    LEE: Yeah.
AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right.
LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah.
AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor, traditionally given to diabetics but now something lots of people will wear—and sports people will wear them—you’ve probably gathered a lot of extreme tail distribution data by reading Reddit’s r/biohackers …
    LEE: Yes.
    AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next.
    LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this.
    And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions?
    AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in.
LEE: OK.
AZHAR: As patients, we will have many, many more touch points and interaction with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches.
    And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time.
    LEE: Yes.
    AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety.
And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us get ahead of problems before they arise. And again, I use my experience: for things that I’ve tracked really, really well, I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of a preemptive measure. So I think that will become progressively more common, and that sense that we will know our baselines.
I mean, when you think about being an athlete, which is something I think about, but I could never ever do, but what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive, but it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health.
    LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said.
Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think the systemic issues that you tend to see with such clarity are going to be the most, kind of, profound drivers of change in the future. So thank you so much.
    AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you.  
    I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies.
    In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.  
    Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear.
Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even radical ways. I think a big insight I got from these conversations is that how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level.
Since my conversations with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, a travel agent, and more were also shown during the conference.
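As a rough illustration of what “treating an AI agent as a team member” means in practice, here is a hypothetical sketch. The `Task` and `CodingAgent` types are invented for this example; they are not the actual GitHub or Copilot API, whose details were not described here.

    from dataclasses import dataclass, field

    # Hypothetical stand-ins for an agent platform; invented for illustration,
    # not the real GitHub Copilot coding-agent interface.
    @dataclass
    class Task:
        title: str
        repo: str
        acceptance_criteria: list = field(default_factory=list)

    class CodingAgent:
        def assign(self, task: Task) -> str:
            # A real agent would clone the repo, work on a branch, and open a
            # pull request for human review; here we just acknowledge the task.
            return (f"Agent accepted '{task.title}' on {task.repo}; "
                    f"a draft PR will follow for human review.")

    task = Task(
        title="Add input validation to the intake form",
        repo="example-org/clinic-portal",
        acceptance_criteria=["reject empty fields", "unit tests pass"],
    )
    print(CodingAgent().assign(task))

The design point is the review step: the agent produces work product for a human to accept or reject, like any other team member.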
But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing.
A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in.
    Until next time.
    What AI’s impact on individuals means for the health workforce and industry
Transcript
PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.”
This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.
Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?
In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.
The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak. You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues. So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.
To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar. Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence. Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics.
Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.
Here is my interview with Ethan Mollick:
LEE: Ethan, welcome.
ETHAN MOLLICK: So happy to be here, thank you.
LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. So to get started, how and why did it happen that you’ve become one of the leading experts on AI?
MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was doing my PhD at MIT, I worked with Marvin Minsky and the MIT Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it. And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially on education and AI and performance, I became sort of a go-to person in the field. And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months’ head start for GPT-4, right. Like, that’s all we have at this point, is a few months’ head start. So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.
LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?
MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that tell us about the state of AI right now. One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much, because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things. And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy in diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever. So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay. So we’re kind of all in the same boat here, which is a very unusual space for a new technology.
LEE: And I, you know, explained that you’re at Wharton.
Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty?
MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is a general-purpose technology; it’s going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect. So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated.
LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.
MOLLICK: Yeah. Those are great questions. So first of all, when I was at the media lab, that was before the current boom in, sort of, you know, even the old-school machine learning kind of space. So there were a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system. There was a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing? The fact that it was just, like, brute force over the corpus of all human knowledge turns out to be a little bit of, like, you know, a miracle and a little bit of a disappointment in some ways compared to how elaborate some of this was. So, you know, I think that that was sort of my first encounter in, sort of, the intellectual way. The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level.
That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021, that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.
LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.
MOLLICK: Yes.
LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?
MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course computers can give you answers to questions and write fun things. So there’s a difference in moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right. I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use, because the first use case that everyone puts AI to, it fails at, because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of, like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?” So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4-level intelligence from, you know, when it was released has dropped 99.97% at this point, right.
LEE: Yes. Mm-hmm.
MOLLICK: I mean, I could run a GPT-4-class system basically on my phone. Microsoft’s releasing things that can almost run on, like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either.
LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time. You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that.
And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?
MOLLICK: I mean, you know, it’s not even dread as much as, like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered. You know, as academics, we’re a little used to dead ends, right, and, like, you know, sometimes getting lapped. But the idea that entire fields are hitting that way. Like, in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete. What do you do to change that? You know, it’s like the fact that this brute-force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one. Now we’ve built a system that can build teaching simulations on demand by you talking to it with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of, like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet.
LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this.
MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where, like, ughhhh, but then there’s joy and basically, like, also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills. Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of the stress of our job comes from the fact that we suck at some of it. And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be, like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI.
And I think it’s an ongoing question about how long that lasts. But for right now, like, you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely. But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety.
LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or a request for prior authorization for some reimbursement to an insurance company. And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of the workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?
MOLLICK: So, I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there’s lots of performance gains to be had, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about systems that fit along with this, right. So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there are incredible amounts of individual innovation in productivity and performance improvements in this field, like, very high levels of it, but we’re not necessarily seeing that same thing translate to organizational efficiency or gains. And one of my big concerns is seeing that happen. We’re seeing the same kind of thing in nonmedical problems, which is, you know, we’ve got research showing 20 to 40% performance improvements, like, not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result.
LEE: You know, where are those productivity gains going, then, when you get to the organizational level?
MOLLICK: Well, they’re dying for a few reasons. One is, there’s a tendency for individual contributors to underestimate the power of management, right. Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the advantage of US firms over firms in other countries has to do with management ability. Like, management is a big deal. Organizing is a big deal.
Thinking about how you coordinate is a big deal. At the individual level, when things get stuck there, right, you can’t start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen. So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons. And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves. So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there’s lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change.
LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?
MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. They involve change. So I think part of this is embracing the idea that R&D has to happen in organizations again. What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field. So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab. So the crowd is the idea of how to empower clinicians and administrators and support networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about how to get AI to work, not just in direct patient care, right, but also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill? And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.
LEE: So let’s shift a little bit to the patient.
You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones. And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ?
MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI, because it’s cheap, right? You could overrule it or whatever you want, but, like, not asking seems foolish. I think the two places where there’s a burning, almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space. But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say, when should you use AI and when not? Are there blind spots? What are those things? And I worry that, like, to me, that would be the crash project I’d be invoking, because I’m doing the same thing in education, which is, this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than not getting, you know, access to the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to. So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that.
LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?
MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t, like, an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting, you know, any one case is probably not that useful. A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but, like, magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing. So I worry when people say teach AI skills.
No one’s been able to articulate to me, as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right. I mean, there’s value in learning a little bit about how the models work. There’s value in working with these systems. A lot of it’s just hands-on-keyboard kind of work. But, like, we don’t have an easy slam-dunk “this is what you learn in the world of AI,” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better at prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on keyboards, getting them to … there’s, like, four things I could teach you about AI, and two of them are already starting to disappear. But, like, one is, be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is, give it step-by-step directions—that’s becoming less important. And the fourth is, good and bad examples of the kind of output you want. Those four, that’s, like, that’s it as far as the research telling you what to do, and the rest is building intuition.
LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.”
MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize.
LEE: It’s good to chuckle about that, but actually, I can’t think of a better book. Like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading?
MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems. So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is. But using it. Voice modes help a lot. In terms of readings, I mean, I think that there are a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview …
LEE: Yeah, that’s a great one.
MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think Karpathy has some really nice videos that I would recommend. Like, on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But, like, all we can offer are hints in some ways. Like, there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating, because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works.
LEE: Yeah.
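Mollick’s four prompting principles from earlier in this exchange (be direct, provide context, give step-by-step directions, show good and bad examples) are concrete enough to sketch. Here is a minimal, hypothetical prompt that applies all four to a clinical writing task; the scenario, patient details, and example sentences are invented for illustration and are not from the conversation or from Co-Intelligence.

    # A sketch of a prompt applying the four principles listed above.
    # Everything below (the task, patient details, examples) is hypothetical.
    parts = [
        # 1. Be direct: tell the AI exactly what you want.
        "Rewrite the discharge note below at a sixth-grade reading level.",
        # 2. Provide as much context as possible.
        "Context: the reader is a 68-year-old patient newly diagnosed with type 2 diabetes.",
        # 3. Step-by-step directions (becoming less important on newer models).
        "Steps: list medications first, then the daily routine, then warning signs that mean 'call your doctor.'",
        # 4. Good and bad examples of the output you want.
        "Good example: 'Take metformin with breakfast every morning.'",
        "Bad example: 'Administer metformin 500 mg PO BID with meals.'",
        "Discharge note: <paste note here>",
    ]
    print("\n".join(parts))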
MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right.
LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here. Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME accrediting body, what’s the one thing you would want them to really internalize?
MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, which is, “Let’s see how this works.” Because it’s, like, the things that make medical technologies hard to do, which is, like, unclear results, limited, you know, expensive use cases where it rolls out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine. I mean, there’s a minor point that I’d make that actually is a really important one, which is, large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods; like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast. So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right. We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other form of AI, and your AI working group that is thinking about how to solve this problem is not the right people here.
LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer.
MOLLICK: Yes. Yes.
LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall. But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration.
And I thought your use of the term Co-Intelligence really, just even in the title of the book, tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea?
MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like, that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person, even though it isn’t, right. There’s a lot of warnings and caveats to it, but if you start from person, smart person you’re talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right.
LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens?
MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who, like, may or may not want to talk to a machine instead of a person, but also legal systems and administrative systems and systems that allocate labor and systems that train people. So, like, it’s hard to imagine that in five to 10 years medicine would be so upended that even if AI was better than doctors at every single thing doctors do, we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine. But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference. We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, the best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point. Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not. Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get?
LEE: Yeah.
MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes it easier to think about the future, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help for everything. Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way or get much worse if we handle it badly. Diagnostic accuracy will increase, right. And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen and how does that affect us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it.
LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining.
MOLLICK: Thank you.
I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work. One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does. In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI. The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI.
Here’s now my interview with Azeem Azhar:
LEE: Azeem, welcome.
AZEEM AZHAR: Peter, thank you so much for having me.
LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before. And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day?
AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip …
LEE: Oh wow.
AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started. And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large.
LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through?
AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed. Now, I’d been aware of GPT-3 and GPT-2, which I had played around with, and with BERT, the original transformer paper, about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language, that really made me think we’d crossed into a new domain. We’d gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely passed some kind of threshold.
LEE: And who’s the we that you were experimenting with?
AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, or they walk into our virtual team room, and we try to solve problems.
LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities.
Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance, manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and which ways, you know, efficiencies are found. And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and look at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine?
AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. I mean, it’s hard to imagine a sector that is more broad than that. So I think we can start to break it down, and, you know, where we’re seeing things with generative AI will be at the, sort of, softest entry point, which is medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros, I noticed, right? They’re on the tablet computers, and they’re scribing away. And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload. And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help. So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open-sourced. So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems, to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized.
LEE: Yeah.
AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep-tracking rings, the Oura.
LEE: Yup.
AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to. And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and patient pathways.
And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on. It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector. And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout. So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform, over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.
LEE: I love how you break that down. And I want to press on a couple of things. You also touched on the fact that medicine is, at least in most of the world, a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?
AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example. In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different. I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say, well, no one ever does this, and I said, well, you know, the thing is that I kind of just want to get this thing to go away.
LEE: Yeah.
AZHAR: And I think that that’s why medicine and healthcare are so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week.
LEE: Right. Yeah.
AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot, just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.
LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution. Because like healthcare, as a consumer, I don’t have choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons. And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work?
AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice. I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good-quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors. I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner.
LEE: Yeah.
AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.
LEE: Right.
AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few differences: the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.
LEE: Yeah, one last question while we’re still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis.
And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn't seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity since obviously costs, in most of the developed world, are a huge, huge problem?

AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around the hospital faster and faster than ever before. We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could "visit more patients." Right?

LEE: Yeah, yeah.

AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed the productivity by orders of magnitude of people who made cars, starting with Henry Ford, because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system. So when we think about how AI will affect the clinician, the nurse, the doctor, it's much easier for us to imagine it as the pendant light that just has them working later …

LEE: Right.

AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for. And I'm not sure. I don't think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale-out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there's a lower level of training and cost and expense that's required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It's what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system …

LEE: Yup.

AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that. So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies, that might slow things down. But I think a reimagining is possible. And if I may, I'll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya was another. And the latter was a cardiac care unit where you couldn't get enough heart surgeons.

LEE: Yeah, yep.

AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel.
So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can't expect a single bright algorithm to do it on its own.

LEE: Yeah, really, really interesting. So now let's get into regulation. And let me start with this question. You know, there are several startup companies I'm aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, that is certified in some way, licensed to do. Do you think we'll get to a point where for certain regulated activities, humans are more or less cut out of the loop?

AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I'm sure we will. We already do that. I delegate less important decisions, like what time I should leave for the airport, to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold. If I come back to my example of prescribing Ventolin: it's really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who've been through the asthma process, needs to be prescribed by someone who's gone through 10 years or 12 years of medical training, or why it couldn't be prescribed by an algorithm or an AI system.

LEE: Right. Yep. Yep.

AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can't really see what the objections are. And the real issue is where do you draw the line of where you say, "Listen, this is too important," or "The cost is too great," or "The side effects are too high," and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line, I suspect, will start fairly low, and what we'd expect to see would be that that would rise progressively over time.

LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let's say, the next prescription to be more personalized and more tailored specifically for you.

AZHAR: Yes. Well, let's dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician. In the UK, it's very difficult to get an appointment. I would have had to see someone privately who didn't know me at all because I've just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I've been miserable for a couple of days with severe wheezing. Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids.
It includes something else I've just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an "algorithm." It's called a protocol. It's printed out. It's a flowchart. I answer various questions, and then I say, "I'm going to prescribe this to myself." You know, UK doctors don't prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It's a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the "AI," is … it's obviously been done in PowerPoint, naturally, and it's a bunch of arrows. Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I'm making the right decision around something like that.

LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time.

AZHAR: Yeah, yeah. Thank god for Clippy. Yes.

LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we've discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can't remember if it was Carey's or Zak's idea, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary on that. And really, I think we took that approach because we just didn't have much to offer. By the way, in our defense, I don't think anyone else had any better ideas anyway.

AZHAR: Right.

LEE: And so now, two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like?

AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they're doing the right thing, that they are still insured for what they're doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs, and there are the classic set of processes you go through. You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning, so that every element of the system can learn from this experience. So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we're very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly. So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what is it to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots.

LEE: Yes.

AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this.
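The printed rescue-pack "algorithm" Azhar describes above is, at bottom, a small decision tree. Purely to make that concrete, here is a minimal sketch in Python of what such a flowchart amounts to in code; the questions, thresholds, and actions are invented for illustration and are not medical guidance:

```python
# Hypothetical sketch of a self-management protocol expressed as a decision tree.
# All questions and actions here are illustrative, not medical advice.

def rescue_pack_protocol(answers: dict) -> str:
    """Walk a flowchart of yes/no questions and return a recommended action."""
    if answers.get("severe_breathlessness"):
        return "Seek urgent care"  # the flowchart's escalation branch
    if answers.get("wheezing_worsening") and answers.get("reliever_not_helping"):
        return "Start rescue-pack steroid course and inform GP"
    if answers.get("signs_of_chest_infection"):
        return "Start rescue-pack antibiotic and inform GP"
    return "Continue usual inhalers and monitor"

# Example walk through the flowchart:
print(rescue_pack_protocol({
    "severe_breathlessness": False,
    "wheezing_worsening": True,
    "reliever_not_helping": True,
}))  # -> "Start rescue-pack steroid course and inform GP"
```

The point of the sketch is Azhar's own: a protocol like this is just a few arrows on a printed page, which is exactly why an AI system that also knows the patient's history could plausibly do better.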
AZHAR: So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don't require approval. I come back to my sleep tracking ring. So I've been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he's saying … hearing what I'm saying, but he's actually pulling up the real data going, This patient's lying to me again. Of course, I'm very truthful with my doctor, as we should all be.

LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. Because one of the things that consumers always want to know is, you know, what's the truth?

AZHAR: Right.

LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let's say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow.

AZHAR: I agree, they're very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week. And you'll do that anyway because you need the data, and you also need a little bit of a liability shield, to have, like, a sensible person who's been trained around that. And I think that's going to be a very important pathway for many AI medical crossovers. We're going to go through the clinician.

LEE: Yeah.

AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right.

LEE: Yeah, that's right. Yes, yeah, absolutely. Yeah.

AZHAR: Sometimes it's right. And I think that it serves a really good role as a field of extreme experimentation. So if you're somebody who makes a continuous glucose monitor—traditionally given to diabetics, but now lots of people will wear them, and sports people will wear them—you probably gathered a lot of extreme tail-distribution data by reading the r/biohackers subreddit …

LEE: Yes.

AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM. And so I think we shouldn't understate how important that petri dish can be for helping us learn what could happen next.

LEE: Oh, I think it's absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this. And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions?

AZHAR: I'm going to push the boat out, and I'm going to go further out than closer in.

LEE: OK.

AZHAR: As patients, we will have many, many more touch points and interaction with our biomarkers and our health. We'll be reading how well we feel through an array of things. And some of them we'll be wearing directly, like sleep trackers and watches. And so we'll have a better sense of what's happening in our lives. It's like the moment you go from paper bank statements that arrive every month to being able to see your account in real time.

LEE: Yes.
AZHAR: And I suspect we'll have … we'll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we're going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it's there, we can check in with it, it's likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety. And we're learning how to do that slowly. I don't think the health apps on our phones and devices have yet quite got that right. And that could help us anticipate problems before they arise, and again, I use my experience for things that I've tracked really, really well. And I know from my data and from how I'm feeling when I'm on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of a preemptive measure. So I think that that will become progressively more common, and that sense that we will know our baselines. I mean, when you think about being an athlete, which is something I think about but could never ever do, what happens is you start with your detailed baselines, and that's what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn't feel invasive, but it'll be done in a way that feels enabling. We'll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won't be spending their time on just "take two Tylenol and have a lie down" type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health.

LEE: Azeem, it's so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you've said. Let me just thank you again for joining this conversation. I think it's been really fascinating. And I think somehow the systemic issues, the systemic issues that you tend to just see with such clarity, I think are going to be the most, kind of, profound drivers of change in the future. So thank you so much.

AZHAR: Well, thank you, it's been my pleasure, Peter, thank you.

I always think of Azeem as a systems thinker. He's always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies. In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we've heard about from Ethan Mollick. Azeem's personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know, in a deeply personalized and effective way, leading to better care.
Azeem's relentless optimism about our AI future was also so heartening to hear. Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we'll all need to change in pretty fundamental and maybe even radical ways. I think a big insight I got from these conversations is that how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level. Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft's yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents—for example, a meeting facilitator, a data analyst, a business researcher, a travel agent, and more—were also shown during the conference. But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was in Stanford's cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient's cancer treatment. It was pretty amazing.

A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I'm really excited for the upcoming episodes, including discussions on medical students' experiences with AI and AI's influence on the operation of health systems and public health departments. We hope you'll continue to tune in. Until next time.
• What AI's impact on individuals means for the health workforce and industry
Transcript

[MUSIC]

[BOOK PASSAGE]

PETER LEE: "In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it."

[END OF BOOK PASSAGE]

[THEME MUSIC]

This is The AI Revolution in Medicine, Revisited. I'm your host, Peter Lee. Shortly after OpenAI's GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong? In this series, we'll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.

[THEME MUSIC FADES]

The book passage I read at the top is from "Chapter 4: Trust but Verify," which was written by Zak. You know, it's no secret that in the US and elsewhere, shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues. So in this episode, we'll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we'll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we'll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.

To help us do that, I'm pleased to welcome Ethan Mollick and Azeem Azhar. Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine's most influential people in AI for 2024. He's also the author of the New York Times best-selling book Co-Intelligence. Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics. Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.
[TRANSITION MUSIC]

Here is my interview with Ethan Mollick:

LEE: Ethan, welcome.

ETHAN MOLLICK: So happy to be here, thank you.

LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. [LAUGHTER] So to get started, how and why did it happen that you've become one of the leading experts on AI?

MOLLICK: It's actually an interesting story. I've been AI-adjacent my whole career. When I was [getting] my PhD at MIT, I worked with Marvin Minsky and the MIT [Massachusetts Institute of Technology] Media Lab's AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn't understand it. And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially under education and AI and performance, I became sort of a go-to person in the field. And once you're in a field where nobody knows what's going on and we're all making it up as we go along—I thought it's funny that you led with the idea that you have a couple of months' head start for GPT-4, right. Like, that's all we have at this point, is a few months' head start. [LAUGHTER] So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.

LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?

MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that tell us about the state of AI right now. One, nobody really knows what's going on, right. So in a lot of ways, if it wasn't for your work, Peter, like, I don't think people would be thinking about medicine as much because these systems weren't built for medicine. They weren't built to change education. They weren't built to write memos. They, like, they weren't built to do any of these things. They weren't really built to do anything in particular. It turns out they're just good at many things. And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don't think about the fact that it expresses high empathy. They don't think about its accuracy and diagnosis or where it's inaccurate. They don't think about how it's changing education forever. So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they're not thinking about this. And the fact that a few months' head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren't sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it's out the door. Like, there's very little delay.
So we're kind of all in the same boat here, which is a very unusual space for a new technology.

LEE: And I, you know, explained that you're at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty?

MOLLICK: I mean, it's a little of both, right. It's faculty, so everybody does everything. I'm a professor of innovation and entrepreneurship. I've launched startups before, and working on that and education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine's always been very central to that, right. A lot of people in my MBA class have been MDs, either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don't think that's that bad a fit. But I also think this is a general-purpose technology; it's going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right. It's … there's transformation happening in literally every field, in every country. This is a widespread effect. So I don't think we should be surprised that business schools matter on this because we care about management. There's a long tradition of management and medicine going together. There's actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated.

LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I've been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky's lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.

MOLLICK: Yeah. Those are great questions. So first of all, when I was at the Media Lab, that was pre-the-current-boom in, sort of, you know, even the old-school machine learning kind of space. So there were a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system. There were a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much the Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing? The fact that it was just, like, brute force over the corpus of all human knowledge turns out to be a little bit of, like, a, you know … it's a miracle and a little bit of a disappointment in some ways [LAUGHTER] compared to how elaborate some of this was. So, you know, I think that that was sort of my first encounter, in sort of the intellectual way. The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon.
And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That's really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021, that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.

LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.

MOLLICK: Yes.

LEE: You know, it's remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?

MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn't see where this was going. It didn't seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course computers can give you answers to questions and write fun things. So there's a difference in moving into a world of generative AI. I think a lot of people just thought that's what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right. I mean, first of all, they seem intuitive to use, but they're not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it's genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn't changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of, like, "Oh my god, what does it mean that a machine could think—apparently think—like a person?" So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there's the fact that the curve of the technology is quite great. I mean, the price of GPT-4-level intelligence from, you know, when it was released has dropped 99.97% at this point, right.

LEE: Yes. Mm-hmm.

MOLLICK: I mean, I could run a GPT-4-class system basically on my phone. Microsoft's releasing things that can almost run on, like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don't think people have a sense of how fast the trajectory is moving either.

LEE: Yeah, you know, there's something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time.
You know, if you were working, let's say, in Bayesian reasoning or in traditional, let's say, Gaussian-mixture-model-based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I've dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?

MOLLICK: I mean, you know, it's not even dread as much as like, you know, Tyler Cowen wrote that it's impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He'd write these limericks. Everyone was always amused by them. [LAUGHTER] And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered. You know, as academics, we're a little used to dead ends, right, and like, you know, some getting the lap. But the idea that entire fields are hitting that way. Like in medicine, there's a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete. What do you do to change that? You know, it's like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it's not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we're very proud of them. It took years of work to build one. Now we've built a system that can build teaching simulations on demand by you talking to it with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there's a switch to a new form of excitement, but there is a little bit of, like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven't had that displacement, I think that's a good indicator that you haven't really faced AI yet.

LEE: Yeah, what's so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that's also kind of an interesting aspect of all of this.

MOLLICK: Yeah, I mean, I think there's something on the other side, right. But, like, I can't say that I haven't had moments where like, ughhhh, but then there's joy and basically like also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you're a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills. Like, who would ever want that kind of bundle? That's not something you're all good at, right. And a lot of our stress of our job comes from the fact that we suck at some of it.
And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it's doing that you wanted to do. But it's much more uplifting to be like, I don't have to do this stuff I'm bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you're best at, you're still better than AI. And I think it's an ongoing question about how long that lasts. But for right now, like, you're not going to say, OK, AI replaces me entirely in my job in medicine. It's very unlikely. But you will say it replaces these 17 things I'm bad at, but I never liked that anyway. So it's a period of both excitement and a little anxiety.

LEE: Yeah, I'm going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let's get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or request for prior authorization for some reimbursement to an insurance company. And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?

MOLLICK: So I mean, this is a case where I think we're facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there's lots of performance gains to be gained, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it's also about systems that fit along with this, right. So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they're for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we're in this really interesting period where there's incredible amounts of individual innovation in productivity and performance improvements in this field, like, very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains. And one of my big concerns is seeing that happen. We're seeing that in nonmedical problems, the same kind of thing, which is, you know, we've got research showing 20 and 40% performance improvements, like, not uncommon to see those things. But then the organization doesn't capture it; the system doesn't capture it. Because the individuals are doing their own work and the systems don't have the ability to, kind of, learn or adapt as a result.

LEE: You know, where are those productivity gains going, then, when you get to the organizational level?

MOLLICK: Well, they're dying for a few reasons.
One is, there's a tendency for individual contributors to underestimate the power of management, right. Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the advantage of US firms over firms in other countries has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal. At the individual level, when things get stuck there, right, you can't start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen. So because of that, people are hiding their use. They're actually hiding their use for lots of reasons. And it's a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they're actually clinicians themselves because they're experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don't want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves. So we're just not used to a period where everybody's innovating and where the management structure isn't in place to take advantage of that. And so we're seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there's lots of regulatory hurdles, people are so afraid of the regulatory piece that they don't even bother trying to make change.

LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?

MOLLICK: So I think that you need to embrace the right kind of risk, right. We don't want to put risk on our patients … like, we don't want to put uninformed risk. But innovation involves risk to how organizations operate. They involve change. So I think part of this is embracing the idea that R&D has to happen in organizations again. What's happened over the last 20 years or so has been organizations giving that up. Partially, that's a trend to focus on what you're good at and not try and do this other stuff. Partially, it's because it's outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field. So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab. So the crowd is the idea of how to empower clinicians and administrators and support networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to [get] AI to work, not just in direct patient care, right.
But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill? And we need to be doing active experimentation on that. We can't just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.

LEE: So let's shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones. And there was already, prior to all of this, a trend towards, let's call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer's perspective, what … ?

MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it's cheap, right? You could overrule it or whatever you want, but, like, not asking seems foolish. I think the two places where there's a burning, almost, you know, moral imperative is … let's say, you know, I'm in Philadelphia, I'm a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I'm lucky. I'm well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I'm, you know, pretty well educated in this space. But for most people on the planet, they don't have access to good medical care, they don't have good health. It feels like it's absolutely imperative to say, when should you use AI and when not? Are there blind spots? What are those things? And I worry that, like, to me, that would be the crash project I'd be invoking because I'm doing the same thing in education, which is, this system is not as good as being in a room with a great teacher who also uses AI to help you, but it's better than not getting, you know, access to the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren't thinking about this. We have to. So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that's true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here's when to use it, here's when not to use it, we don't have a right to say, don't use this system. And I think, you know, we have to be exploring that.

LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?

MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn't, like, an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting, you know, any one case is probably not that useful.
A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but, like, magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing. So I worry when people say teach AI skills. No one's been able to articulate to me, as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right. I mean, there's value in learning a little bit how the models work. There's value in working with these systems. A lot of it's just hands-on-keyboard kind of work. But, like, we don't have an easy slam-dunk "this is what you learn in the world of AI" because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better at prompting themselves. They solve problems spontaneously and start being agentic. So it's a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboards, getting them to … there's, like, four things I could teach you about AI, and two of them are already starting to disappear. But, like, one is be direct. Like, tell the AI exactly what you want. That's very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that's becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that's, like, that's it as far as the research telling you what to do, and the rest is building intuition.

LEE: I'm really impressed that you didn't give the answer, "Well, everyone should be teaching my book, Co-Intelligence." [LAUGHS]

MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize. [LAUGHTER]

LEE: It's good to chuckle about that, but actually, I can't think of a better book. Like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading?

MOLLICK: That's a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems. So, like, it's, sort of, like my Twitter feed, my online newsletter. I'm just trying to, kind of … in some ways, it's about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let's use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is. But using it. Voice modes help a lot. In terms of readings, I mean, I think that there are a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview …

LEE: Yeah, that's a great one.

MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think [Andrej] Karpathy has some really nice videos of use that I would recommend.
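The four prompting principles Mollick lists above lend themselves to a concrete template. Here is a minimal sketch, assuming nothing more than a plain Python function rather than any specific product's API; the clinical scenario and example texts are invented for illustration:

```python
# Illustrative only: a prompt template embodying Mollick's four principles.
# The scenario, steps, and example texts are hypothetical.

def build_prompt(task: str, context: str, steps: list[str],
                 good_example: str, bad_example: str) -> str:
    """Assemble a prompt from the four elements Mollick describes."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"{task}\n\n"                                    # 1. Be direct.
        f"Context:\n{context}\n\n"                       # 2. Provide context.
        f"Work step by step:\n{numbered}\n\n"            # 3. Step-by-step directions.
        f"Good output looks like:\n{good_example}\n\n"   # 4. Good and bad examples.
        f"Avoid output like:\n{bad_example}\n"
    )

print(build_prompt(
    task="Draft a discharge summary for the encounter described below.",
    context="Acting as a hospitalist. Patient: 58-year-old admitted for "
            "pneumonia, treated with IV antibiotics, stable on discharge.",
    steps=["Summarize the admission", "List medications", "State follow-up plan"],
    good_example="Concise, structured, plain language the patient can follow.",
    bad_example="A wall of unstructured jargon with no follow-up instructions.",
))
```

As Mollick notes, two of the four elements (step-by-step directions especially) are already fading as models improve, so a template like this is best treated as scaffolding for building intuition rather than a lasting recipe.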
MOLLICK: Like on the medical side, I think the book that you did, if you're in medicine, you should read that. I think that that's very valuable. But, like, all we can offer are hints in some ways. Like, there isn't … if you're looking for the instruction manual, I think it can be very frustrating because it's like you want the best practices and procedures laid out, and we cannot do that, right. That's not how a system like this works.

LEE: Yeah.

MOLLICK: It's not a person, but thinking about it like a person can be helpful, right.

LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it's been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what's necessary here. Besides the things that you've … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME [Liaison Committee on Medical Education] accrediting body, what's the one thing you would want them to really internalize?

MOLLICK: This is both a fast-moving and vital area. This can't be viewed like a usual change, which [is], "Let's see how this works." Because it's, like, the things that make medical technologies hard to do, which is, like, unclear results, limited, you know, expensive use cases where it rolls out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multibillion-dollar cost, and that takes a while to diffuse out. That's not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine. I mean, there's a minor point that I'd make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods; like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast. So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don't like it, you overrule it, right. We don't find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it's more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other form of AI, and your AI working group that is thinking about how to solve this problem is not the right people here.

LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer.

MOLLICK: Yes. Yes.

LEE: Write a query, get results.
And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that's partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall. But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that's not the sweet spot. It's this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence really, just even in the title of the book, tried to capture this. When I think about education, it seems like that's the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea?

MOLLICK: I think that's very powerful. You know, we've been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask them what love is, they explode, right. Like, that's the classic way you defeat the evil robot in Star Trek, right. "Love does not compute." [LAUGHTER] Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you're absolutely right. And that's part of what I thought your book does get at really well is, like, this is a different thing. It's also generally applicable. Again, the model in your head should be kind of like a person, even though it isn't, right. There's a lot of warnings and caveats to it, but if you start from person, smart person you're talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you'll get to understand, like, I totally don't trust it for this, but I absolutely trust it for that, right.

LEE: All right. So we're getting to the end of the time we have together. And so I'd just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens?

MOLLICK: OK, so first of all, let's acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they're part of systems, right. So not just the system of a patient who, like, may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people. So, like, it's hard to imagine that in five to 10 years medicine would be so upended that even if AI was better than doctors at every single thing doctors do, that we'd actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine. But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That's the difference.
We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, the best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point. Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not. Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get? LEE: Yeah. MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes it easier to assume the future, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help for everything. Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way, or get much worse if we handle it badly. Diagnostic accuracy will increase, right. And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How that happens and how it affects us is a, kind of, unknown question. So I think clinicians are both in the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it. LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining. MOLLICK: Thank you. [TRANSITION MUSIC]   I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work. One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does. In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. 
But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI. The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI. Here’s now my interview with Azeem Azhar: LEE: Azeem, welcome. AZEEM AZHAR: Peter, thank you so much for having me. LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before. And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day? AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip … LEE: Oh wow. AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started. And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large. LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through? AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed. Now, I’d been aware of GPT-3 and GPT-2, which I played around with, and of BERT and the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language that really made me think we’d crossed into a new domain. We’ve gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely passed some kind of threshold. LEE: And who’s the we that you were experimenting with? AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. 
I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, [LAUGHTER] or they walk into our virtual team room, and we try to solve problems. LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance, manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and which ways, you know, efficiencies are found. And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and look at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine? AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. I mean, it’s hard to imagine a sector that is [LAUGHS] more broad than that. So I think we can start to break it down, and, you know, where we’re seeing things with generative AI will be at the, sort of, softest entry point, which is medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros, I noticed, right? [LAUGHTER] They’re on the tablet computers, and they’re scribing away. And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload. And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help. So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced. So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized. LEE: Yeah. AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura. LEE: Yup. 
AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional machine learning, I suspect, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to. And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and the patient pathway. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on. It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector. And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. [LAUGHTER] But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout. So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform, over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems. LEE: I love how you break that down. And I want to press on a couple of things. You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated? AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example. In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different. I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say, well, no one ever does this, and I said, well, you know, the thing is that I kind of just want to get this thing to go away. LEE: Yeah. 
AZHAR: And I think that’s why medicine and healthcare are so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week. LEE: Right. Yeah. AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer. LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution. Because like healthcare, as a consumer, I don’t have a choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons. And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas in other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work? AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice. I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors. I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner. LEE: Yeah. AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly. LEE: Right. AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few differences: the ability the consumer has to put in some effort and learn about their condition, and the fact that some of the regulations that exist just exist because certain professions are powerful. LEE: Yeah, one last question while we’re still on economics. 
There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse who wants to be able to handle even more patients than they’re doing on a daily basis. And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity since obviously costs are, in most of the developed world, a huge, huge problem? AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around [LAUGHTER] the hospital faster and faster than ever before. We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right? LEE: Yeah, yeah. AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed the productivity by orders of magnitude of people who made cars, starting with Henry Ford, because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system. So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later … LEE: Right. AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for. And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale-out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system … LEE: Yup. AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that. So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies, that might slow things down. But I think a reimagining is possible. And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya [now known as Narayana Health] was another. 
And the latter was a cardiac care unit where you couldn’t get enough heart surgeons. LEE: Yeah, yep. AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own. LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, someone who is certified in some way, licensed to do it. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop? AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions, like what time I should leave for the airport, to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold. If I come back to my example of prescribing Ventolin: it’s really unclear to me why Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 or 12 years of medical training, and why it couldn’t be prescribed by an algorithm or an AI system. LEE: Right. Yep. Yep. AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time. LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you. AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician. In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’d just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing. 
Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart: I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI,” has obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. [LAUGHS] Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that. LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time. [LAUGHS] AZHAR: Yeah, yeah. Thank god for Clippy. Yes. LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself [LAUGHS], about regulation. And we made some minor commentary on that. And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway. AZHAR: Right. LEE: And so now, two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like? AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs [randomized controlled trials], and there are the classic set of processes you go through. You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning, so that every element of the system can learn from this experience. So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly. So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. 
So we will then actually, I suspect, need to think about what it is to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots [very rapidly]. LEE: Yes. AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval. I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how I have been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s hearing what I’m saying, but he’s actually pulling up the real data, going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be. [LAUGHTER] LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. Because one of the things that consumers always want to know is, you know, what’s the truth? AZHAR: Right. LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow. AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week. And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have, like, a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician. LEE: Yeah. AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right. [LAUGHTER] LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah. AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor, traditionally given to diabetics but now lots of people will wear them—and sports people will wear them—you’ve probably gathered a lot of extreme tail distribution data by reading Reddit’s r/biohackers … LEE: Yes. AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM [continuous glucose monitor]. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next. LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this. And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions? AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in. LEE: OK. 
[LAUGHS] AZHAR: As patients, we will have many, many more touch points and interactions with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches. And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time. LEE: Yes. AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety. And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us personalize problems before they arise, and again, I use my experience for things that I’ve tracked really, really well. And I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of preemptive action, so I think that that will become progressively more common, and that sense that we will know our baselines. I mean, when you think about being an athlete, which is something I think about but could never ever do, [LAUGHTER] what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive, but it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health. LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said. Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think somehow the systemic issues that you tend to just see with such clarity are going to be the most, kind of, profound drivers of change in the future. So thank you so much. AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you. [TRANSITION MUSIC]   I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies. 
In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick. Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear. Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even radical ways. I think a big insight I got from these conversations is that how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level. Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, a travel agent, and more were also shown during the conference. But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing. [THEME MUSIC] A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in. Until next time. [MUSIC FADES]
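Azeem’s “rescue pack” earlier in the conversation is, at heart, a printed decision tree: answer a few questions, follow the arrows, arrive at an action. A minimal sketch of how such a self-care protocol could be encoded in software follows; every question, threshold, and action below is hypothetical, invented purely for illustration, and is not the actual UK protocol or medical advice.

```python
# A toy encoding of a "rescue pack" style self-care flowchart.
# All inputs, thresholds, and actions are HYPOTHETICAL illustrations,
# not the real UK asthma protocol and not medical advice.

from dataclasses import dataclass

@dataclass
class Answers:
    peak_flow_pct: float       # peak flow as % of personal best (hypothetical input)
    waking_at_night: bool      # nocturnal symptoms?
    reliever_helping: bool     # does the reliever inhaler still help?
    signs_of_infection: bool   # e.g., discolored sputum (hypothetical trigger)

def rescue_pack_protocol(a: Answers) -> list[str]:
    """Walk the flowchart top to bottom and return the recommended actions."""
    if a.peak_flow_pct < 50:
        # Severe deterioration sits outside any self-care protocol.
        return ["Seek urgent medical care now."]
    actions = []
    if a.peak_flow_pct < 75 or a.waking_at_night or not a.reliever_helping:
        actions.append("Start the steroid course from the rescue pack.")
    if a.signs_of_infection:
        actions.append("Start the antibiotic course from the rescue pack.")
    if actions:
        actions.append("Inform your GP practice that the rescue pack was used.")
    else:
        actions.append("Continue as usual; recheck peak flow tomorrow.")
    return actions

if __name__ == "__main__":
    today = Answers(peak_flow_pct=68, waking_at_night=True,
                    reliever_helping=True, signs_of_infection=False)
    for step in rescue_pack_protocol(today):
        print("-", step)
```

Even a sketch this small makes the point concrete: the value of replacing the PowerPoint arrows with an AI system would lie not in the arrows themselves but in handling the edge cases a fixed flowchart never anticipated.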
  • Fortnite Criticized For Use Of AI Darth Vader, Cyberpunk 2077 Sequel Will Introduce New City, And More Top Stories

This week saw Fortnite targeted by SAG-AFTRA for its use of an AI-powered Darth Vader voice that mimics that of the late James Earl Jones. Also, the folks behind Assassin’s Creed Shadows told us why they opted not to let you kill animals in the open-world adventure, fans of Clair Obscur react to the trollish behavior of the game’s enemies, and Neil Druckmann is once again explaining stuff about the world of The Last of Us that some fans, at least (our writer included), think would be better left ambiguous.

Players Are Obsessed With How Clair Obscur: Expedition 33’s Enemies Keep Trolling Them
One of Clair Obscur: Expedition 33’s big innovations is adding a dodge, parry, and counter system to its otherwise traditional turn-based battles. It’s a clever tweak that helps keep combat engrossing for its 30+ hour journey and also an incredible opportunity for the game’s developers to troll the crap out of players. - Ethan Gach

I Wish Neil Druckmann Would Stop Confirming Things About The Last Of Us
You might not know it based on my scathing recaps of The Last of Us’ second season, but I love this series. I love the moral conundrums it presents, the violent grief it depicts, and the games’ excellent writing that poignantly brings all of those complicated emotions to the surface. What I don’t like is listening to pretty much any of the creative team talk about the series, especially when it comes to weighing in on decade-long discourse around its complex storylines. Even when I agree with series director Neil Druckmann’s interpretation of something, we’d all rather he just let bad readings fester in the corners of the internet than tell us exactly what something means. Nevertheless, he continues to do so in interviews. - Kenneth Shepard

Ubisoft Explains Why You Can’t Kill Animals In Assassin’s Creed Shadows
Assassin’s Creed Shadows is a very good game that animal lovers can enjoy because there’s no way to harm a single creature in the game (except for people, of course). That’s a first for the franchise, and I wanted to learn why Ubisoft went this route for its latest open-world adventure. - Zack Zwiezen

Xbox Game Pass Is Getting Too Many Cool Games This Month
Xbox Game Pass has been killing it this year, and May is especially packed. The subscription library is getting a load of cool indies as well as 2024 GOTY contender Metaphor: ReFantazio. That’s on top of all of the heavy hitters that already arrived earlier in the month. There is, quite simply, no time to play them all. - Ethan Gach

Baldur’s Gate 3 Figures Are So Ugly Fans Are Getting Full Refunds
WizKids announced a new collection of Baldur’s Gate 3 miniatures last fall that featured Karlach, Gale, Shadowheart, and other memorable party members from the hit 2023 Dungeons & Dragons-based RPG. The $50 box set has since been released, and the figurines look so bad fans are being promised their money back. - Ethan Gach

There’s Something Very Suspicious About These New Pokémon Plushies
Refreshing the Pokémon Center to see what new items have been added each day has become something of an obsession for me. The site adds new stock so incredibly frequently as to be constantly astonishing, and today is no different. The latest arrivals on the store are a new collection of plushies that feature Ditto in 22 new disguises. And they are adorable. - John Walker

Fortnite In Legal Trouble After Adding AI Darth Vader
SAG-AFTRA, the massive actors and media union with over 160,000 members, has filed an unfair labor practice charge with the National Labor Relations Board against Epic Games over its inclusion of an AI-powered Darth Vader in a recent Fortnite update. - Zack Zwiezen

Cyberpunk 2077’s Sequel Will Return To Night City, As Well As Take Us Somewhere New
We still don’t know much about the Cyberpunk 2077 sequel currently in the works at CD Projekt Red. Development on the RPG, code-named “Orion,” is in full swing after the studio wrapped support for the original game last year, but the team is still keeping most details about it under wraps, other than a few informal quotes here and there about the vibe it’s trying to capture. However, Mike Pondsmith, the creator of the Cyberpunk tabletop roleplaying game, which first debuted in 1988, has revealed a pretty important piece of information: Alongside returning to the capitalist hellscape of Night City, the sequel will take us to another city as well. - Kenneth Shepard

Clair Obscur: Expedition 33 Promises Fresh Round Of Collector’s Editions As Originals Resell For Over $800
RPG fans love their Collector’s Editions, but few guessed just how big or good Clair Obscur: Expedition 33 would turn out to be, including its own developers. The result was that a very limited set of physical releases, including Collector’s Editions exclusive to certain retailers, immediately vanished from store shelves. People are now trying to resell them for as much as $1,500 on eBay, but fortunately developer Sandfall Interactive has just announced it’s making more. - Ethan Gach

GameStop Is Selling A Ton Of Big Games For Just $15
GameStop must be trying to clear out some space, because the national video game retailer is selling a huge assortment of AAA games, remakes, and recent releases for $15 and $30 as part of a new sale. So why not take advantage of this corporate clean-up and grab some big games for less than half the normal price? - Zack Zwiezen
  • Math puzzle: The conundrum of sharing

    This month, we visit a trendy (but fictional) spa with an unusual feature: hot mud beds.
    You lay a plastic sheet on the mud. Then you lay your body upon the sheet. Without any direct contact between mud and body, you spend several minutes enjoying the soft and saunalike heat, sweating all over the plastic. Even though the spa session doesn’t last long, it is said to be wonderfully restorative.
    One day, three friends arrive. Unfortunately, only two plastic sheets are available. No one wants to miss out; then again, no one wants to lie on someone else’s sweat.
    “Wait!” says one. “It’s simple! I’ll use one side of the sheet, and you can use the other.”
    “Are you kidding?” another replies. “That side will be covered in mud.”
    The first friend smiles. “Not if we plan ahead.”

    #1: How can all three friends partake in the spa using just two sheets?
    #2: The next day, five friends visit the spa, and only three sheets are available. Can they all partake? (Let’s assume the spa now forbids laying an already-sweaty side of a sheet directly on their precious mud.)
    #3: Soon, 10 friends visit the spa. Only five sheets are available. “Someone will have to miss out,” one of them declares. “There’s no way to know that,” says another, “until we at least look for a solution.” Who’s right?
    #4: Later, the spa introduces a second kind of mud, which must not be mixed with the first. If three friends want to try both muds, how many sheets do they need at minimum? (Let’s assume each person is begrudgingly willing to lie twice on the same sheet.)
    #5: Lurking here is a fully general question, one that mathematical researchers have yet to solve: What’s the minimum number of sheets that allows N friends to experience M kinds of mud if each side of a sheet may touch only a single person or a single kind of mud? (You might begin by assuming M = 1.)
    While trying these puzzles, I recommend grabbing some index cards or sheets of paper to manipulate. Or if you’re feeling ambitious, grab some plastic sheets, some sweaty friends and a convenient mud patch.
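    If you’d rather let a computer do the sweating, here is a minimal brute-force sketch (our own aside, not part of the column). It models each side of each sheet as a set of contaminants, tries every ordered, flippable stack of sheets for each session, and lets contamination spread on contact. The rules it encodes are our reading of the puzzle, and the solvable function only reports whether a safe schedule exists, so it won’t spoil the constructions; it’s practical for small cases only.

```python
from itertools import permutations, product

def solvable(n_friends, n_sheets, sweat_on_mud=True):
    """Report whether every friend can take one safe mud session.

    Modeling assumptions (ours, not the column's): a friend may touch
    only a clean side or a side carrying their own sweat; contamination
    spreads wherever two surfaces touch; and if sweat_on_mud is False
    (puzzle #2's stricter rule), a sweaty side may not be laid on mud.
    """
    people = [f"p{i}" for i in range(n_friends)]
    fresh = tuple((frozenset(), frozenset()) for _ in range(n_sheets))
    dead = set()  # memoized positions known to be unwinnable

    def canon(state, i):
        # Sheets, and the two sides of a sheet, are interchangeable:
        # sort them so equivalent positions share one memo entry.
        sheets = []
        for a, b in state:
            ka, kb = tuple(sorted(a)), tuple(sorted(b))
            sheets.append((ka, kb) if ka <= kb else (kb, ka))
        return (tuple(sorted(sheets)), i)

    def search(state, i):
        if i == n_friends:
            return True
        key = canon(state, i)
        if key in dead:
            return False
        me = people[i]
        # A session is an ordered stack of k sheets, each flipped or not;
        # layer 0 rests on the mud and friend i lies on top of layer k-1.
        for k in range(1, n_sheets + 1):
            for order in permutations(range(n_sheets), k):
                for flips in product((0, 1), repeat=k):
                    down = [state[s][f] for s, f in zip(order, flips)]
                    up = [state[s][1 - f] for s, f in zip(order, flips)]
                    if not up[-1] <= {me}:    # mud or someone else's sweat
                        continue
                    if not sweat_on_mud and down[0] - {"mud"}:
                        continue              # sweaty side onto the mud
                    down[0] |= {"mud"}
                    up[-1] |= {me}
                    for j in range(k - 1):    # interior faces touch and mingle
                        down[j + 1] = up[j] = up[j] | down[j + 1]
                    nxt = list(state)
                    for j, (s, f) in enumerate(zip(order, flips)):
                        nxt[s] = (down[j], up[j]) if f == 0 else (up[j], down[j])
                    if search(tuple(nxt), i + 1):
                        return True
        dead.add(key)
        return False

    return search(fresh, 0)

print(solvable(3, 2))                       # puzzle #1
print(solvable(5, 3, sweat_on_mud=False))   # puzzle #2
```

    Puzzle #4’s second kind of mud could be modeled the same way, by adding a second contaminant that, like a person, tolerates only itself and clean sides.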
    Looking for answers? Go to sciencenews.org/puzzle-answers. We’d love to hear your thoughts. Email us at puzzles@sciencenews.org.
  • Rocket Report: SpaceX’s expansion at Vandenberg; India’s PSLV fails in flight


    China's diversity in rockets was evident this week, with four types of launchers in action.

    Stephen Clark – May 23, 2025 7:00 am

    Dawn Aerospace's Mk-II Aurora airplane in flight over New Zealand last year. Credit: Dawn Aerospace


    Welcome to Edition 7.45 of the Rocket Report! Let's talk about spaceplanes. Since the Space Shuttle, spaceplanes have, at best, been a niche part of the space transportation business. The US Air Force's uncrewed X-37B and a similar vehicle operated by China's military are the only spaceplanes to reach orbit since the last shuttle flight in 2011, and both require a lift from a conventional rocket. Virgin Galactic's suborbital space tourism platform is also a spaceplane of sorts. A generation or two ago, one of the chief arguments in favor of spaceplanes was that they were easier to recover and reuse. Today, SpaceX routinely reuses capsules and rockets that look much more like conventional space vehicles than the winged designs of yesteryear. Spaceplanes are undeniably alluring in appearance, but they have the drawback of carrying extra weight (wings) into space that won't be used until the final minutes of a mission. So, do they have a future?
    As always, we welcome reader submissions. If you don't want to miss an issue, please subscribe using the box below. Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

    One of China's commercial rockets returns to flight. The Kinetica-1 rocket launched Wednesday for the first time since a failure doomed its previous attempt to reach orbit in December, according to the vehicle's developer and operator, CAS Space. The Kinetica-1 is one of several small Chinese solid-fueled launch vehicles managed by a commercial company, although with strict government oversight and support. CAS Space, a spinoff of the Chinese Academy of Sciences, said its Kinetica-1 rocket deployed multiple payloads with "excellent orbit insertion accuracy." This was the seventh flight of a Kinetica-1 rocket since its debut in 2022.

    Back in action ... "Kinetica-1 is back!" CAS Space posted on X. "Mission Y7 has just successfully sent six satellites into designated orbits, making a total of 63 satellites or 6 tons of payloads since its debut. Lots of missions are planned for the coming months. 2025 is going to be awesome." The Kinetica-1 is designed to place up to 2 metric tons of payload into low-Earth orbit. A larger liquid-fueled rocket, Kinetica-2, is scheduled to debut later this year.


    French government backs a spaceplane startup. French spaceplane startup AndroMach announced May 15 that it received a contract from CNES, the French space agency, to begin testing an early prototype of its Banger v1 rocket engine, European Spaceflight reports. Founded in 2023, AndroMach is developing a pair of spaceplanes that will be used to perform suborbital and orbital missions to space. A suborbital spaceplane will utilize turbojet engines for horizontal takeoff and landing, and a pressure-fed biopropane/liquid oxygen rocket engine to reach space. Test flights of this smaller vehicle will begin in early 2027.
    A risky proposition ... A larger ÉTOILE "orbital shuttle" is designed to be launched by various small launch vehicles and will be capable of carrying payloads of up to 100 kilograms (220 pounds). According to the company, initial test flights of ÉTOILE are expected to begin at the beginning of the next decade. It's unclear how much CNES is committing to AndroMach through this contract, but the company says the funding will support testing of an early demonstrator for its propane-fueled engine, with a focus on evaluating its thermodynamic performance. It's good to see European governments supporting developments in commercial space, but the path to a small commercial orbital spaceplane is rife with risk. (submitted by EllPeaTea)

    Dawn Aerospace is taking orders. Another spaceplane company in a more advanced stage of development says it is now taking customer orders for flights to the edge of space. New Zealand-based Dawn Aerospace said it is beginning to take orders for its remotely piloted, rocket-powered suborbital spaceplane, known as Aurora, with first deliveries expected in 2027, Aviation Week & Space Technology reports. "This marks a historic milestone: the first time a space-capable vehicle—designed to fly beyond the Kármán line (100 kilometers or 328,000 feet)—has been offered for direct sale to customers," Dawn Aerospace said in a statement. While it hasn't yet reached space, Dawn's Aurora spaceplane flew to supersonic speed for the first time last year and climbed to an altitude of 82,500 feet (25.1 kilometers), setting a record for the fastest climb from a runway to 20 kilometers.

    Further along ... Aurora is small in stature, measuring just 15.7 feet (4.8 meters) long. It's designed to loft a payload of up to 22 pounds (10 kilograms) above the Kármán line for up to three minutes of microgravity, before returning to a runway landing. Eventually, Dawn wants to reduce the turnaround time between Aurora flights to less than four hours. "Aurora is set to become the fastest and highest-flying aircraft ever to take off from a conventional runway, blending the extreme performance of rocket propulsion with the reusability and operational simplicity of traditional aviation," Dawn said. The company's business model is akin to commercial airlines, where operators can purchase an aircraft directly from a manufacturer and manage their own operations. (submitted by EllPeaTea)

    India's workhorse rocket falls short of orbit. In a rare setback, Indian Space Research Organisation's (ISRO) launch vehicle PSLV-C61 malfunctioned and failed to place a surveillance satellite into the intended orbit last weekend, the Times of India reported. The Polar Satellite Launch Vehicle lifted off from a launch pad on the southeastern coast of India early Sunday, local time, with a radar reconnaissance satellite named EOS-09, or RISAT-1B. The satellite was likely intended to gather intelligence for the Indian military. "The country's military space capabilities, already hindered by developmental challenges, have suffered another setback with the loss of a potential strategic asset," the Times of India wrote.

    What happened? ... V. Narayanan, ISRO's chairman, later said that the rocket’s performance was normal until the third stage. The PSLV's third stage, powered by a solid rocket motor, suffered a "fall in chamber pressure" and the mission could not be accomplished, Narayanan said. Investigators are probing the root cause of the failure. Telemetry data indicated the rocket deviated from its planned flight path around six minutes after launch, when it was traveling more than 12,600 mph (5.66 kilometers per second), well short of the speed it needed to reach orbital velocity. The rocket and its payload fell into the Indian Ocean south of the launch site. This was the first PSLV launch failure in eight years, ending a streak of 21 consecutive successful flights. (submitted by EllPeaTea)
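    As a quick aside of our own (not from ISRO or the Times of India), you can check what "well short" means here with the circular-orbit speed formula v = sqrt(GM/r). Assuming a roughly 500-kilometer target orbit, the shortfall works out to about 2 kilometers per second:

```python
# Back-of-the-envelope check of the PSLV shortfall. The ~500 km target
# altitude is our assumption; EOS-09's actual planned orbit may differ.
import math

GM_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371_000.0       # m, mean Earth radius
ALTITUDE = 500_000.0        # m, assumed target altitude

v_needed = math.sqrt(GM_EARTH / (R_EARTH + ALTITUDE))  # circular-orbit speed
v_reached = 12_600 * 0.44704                           # 12,600 mph in m/s

print(f"needed ~{v_needed / 1e3:.1f} km/s, reached ~{v_reached / 1e3:.1f} km/s")
# needed ~7.6 km/s, reached ~5.6 km/s
```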
    SES makes a booking with Impulse Space. SES, owner of the world's largest fleet of geostationary satellites, plans to use Impulse Space’s Helios kick stage to take advantage of lower-cost, low-Earth-orbit (LEO) launch vehicles and get its satellites quickly into higher orbits, Aviation Week & Space Technology reports. SES hopes the combination will break a traditional launch conundrum for operators of medium-Earth-orbit (MEO) and geostationary orbit (GEO) satellites. These operators often must make a trade-off between a lower-cost launch that puts them farther from their satellite's final orbit, or a more expensive launch that can expedite their satellite's entry into service.

    A matter of hours ... On Thursday, SES and Impulse Space announced a multi-launch agreement to use the methane-fueled Helios kick stage. "The first mission, currently planned for 2027, will feature a dedicated deployment from a medium-lift launcher in LEO, followed by Helios transferring the 4-ton-class payload directly to GEO within eight hours of launch," Impulse said in a statement. Typically, this transit to GEO takes several weeks to several months, depending on the satellite's propulsion system. "Today, we’re not only partnering with Impulse to bring our satellites faster to orbit, but this will also allow us to extend their lifetime and accelerate service delivery to our customers," said Adel Al-Saleh, CEO of SES. "We're proud to become Helios' first dedicated commercial mission."
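    For a sense of why "within eight hours" is plausible (a sketch of our own, not Impulse's actual trajectory), a classic two-impulse Hohmann transfer coasts for half the period of the transfer ellipse, T/2 = pi * sqrt(a^3/GM), which from LEO to GEO is a little over five hours:

```python
# One-way Hohmann coast time from an assumed 500 km LEO to GEO. Real
# missions add phasing and a circularization burn on top of this.
import math

GM = 3.986004418e14       # m^3/s^2, Earth's gravitational parameter
r_leo = 6_371e3 + 500e3   # m, assumed 500 km parking orbit
r_geo = 42_164e3          # m, geostationary orbit radius

a = (r_leo + r_geo) / 2                  # semi-major axis of transfer ellipse
t_half = math.pi * math.sqrt(a**3 / GM)  # half an orbital period

print(f"LEO-to-GEO Hohmann coast: {t_half / 3600:.1f} hours")  # ~5.3 hours
```

    Low-thrust electric spirals, by contrast, climb over weeks to months, which is the gap a chemical kick stage like Helios closes.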
    Unpacking China's spaceflight patches. There's a fascinating set of new patches Chinese officials released for a series of launches with top-secret satellites over the last two months, Ars reports. These four patches depict Buddhist gods with a sense of artistry and sharp colors that stand apart from China's previous spaceflight emblems, and perhaps—or perhaps not—they can tell us something about the nature of the missions they represent. The missions launched so-called TJS satellites toward geostationary orbit, where they most likely will perform missions in surveillance, signals intelligence, or missile warning. 
    Making connections ... It's not difficult to start making connections between the Four Heavenly Gods and the missions that China's TJS satellites likely carry out in space. A protector with an umbrella? An all-seeing entity? This sounds like a possible link to spy craft or missile warning, but there's a chance Chinese officials approved the patches to misdirect outside observers, or there's no connection at all.

    China aims for an asteroid. China is set to launch its second Tianwen deep space exploration mission in late May, targeting both a near-Earth asteroid and a main belt comet, Space News reports. The robotic Tianwen-2 spacecraft is being integrated with a Long March 3B rocket at the Xichang Satellite Launch Center in southwest China, the country's top state-owned aerospace contractor said. Airspace closure notices indicate a four-hour-long launch window opening at noon EDT (16:00–20:00 UTC) on May 28. Backup launch windows are scheduled for May 29 and 30.
    New frontiers ... Tianwen-2's first goal is to collect samples from a near-Earth asteroid designated 469219 Kamoʻoalewa, or 2016 HO3, and return them to Earth in late 2027 with a reentry module. The Tianwen-2 mothership will then set a course toward a comet for a secondary mission. This will be China's first sample return mission from beyond the Moon. The asteroid selected as the target for Tianwen-2 is believed by scientists to be less than 100 meters, or 330 feet, in diameter, and may be made of material thrown off the Moon some time in its ancient past. Results from Tianwen-2 may confirm that hypothesis. (submitted by EllPeaTea)

    Upgraded methalox rocket flies from Jiuquan. Another one of China's privately funded launch companies achieved a milestone this week. Landspace launched an upgraded version of its Zhuque-2E rocket Saturday from the Jiuquan launch base in northwestern China, Space News reports. The rocket delivered six satellites to orbit for a range of remote sensing, Earth observation, and technology demonstration missions. The Zhuque-2E is an improved version of the Zhuque-2, which became the first liquid methane-fueled rocket in the world to reach orbit in 2023.
    Larger envelope ... This was the second flight of the Zhuque-2E rocket design, but the first to utilize a wider payload fairing to provide more volume for satellites on their ride into space. The Zhuque-2E is a stepping stone toward a much larger rocket Landspace is developing called the Zhuque-3, a stainless steel launcher with a reusable first stage booster that, at least outwardly, bears some similarities to SpaceX's Falcon 9. (submitted by EllPeaTea)

    FAA clears SpaceX for Starship Flight 9. The Federal Aviation Administration gave the green light Thursday for SpaceX to launch the next test flight of its Starship mega-rocket as soon as next week, following two consecutive failures earlier this year, Ars reports. The failures set back SpaceX's Starship program by several months. The company aims to get the rocket's development back on track with the upcoming launch, Starship's ninth full-scale test flight since its debut in April 2023. Starship is central to SpaceX's long-held ambition to send humans to Mars and is the vehicle NASA has selected to land astronauts on the Moon under the umbrella of the government's Artemis program.
    Targeting Tuesday, for now ... In a statement Thursday, the FAA said SpaceX is authorized to launch the next Starship test flight, known as Flight 9, after finding the company "meets all of the rigorous safety, environmental and other licensing requirements." SpaceX has not confirmed a target launch date for the next launch of Starship, but warning notices for pilots and mariners to steer clear of hazard areas in the Gulf of Mexico suggest the flight might happen as soon as the evening of Tuesday, May 27. The rocket will lift off from Starbase, Texas, SpaceX's privately owned spaceport near the US-Mexico border. The FAA's approval comes with some stipulations, including that the launch must occur during "non-peak" times for air traffic and a larger closure of airspace downrange from Starbase.
    Space Force is fed up with Vulcan delays. In recent written testimony to a US House of Representatives subcommittee that oversees the military, the senior official responsible for purchasing launches for national security missions blistered one of the country's two primary rocket providers, Ars reports. The remarks from Major General Stephen G. Purdy, acting assistant secretary of the Air Force for Space Acquisition and Integration, concerned United Launch Alliance and its long-delayed development of the large Vulcan rocket. "The ULA Vulcan program has performed unsatisfactorily this past year," Purdy said in written testimony during a May 14 hearing before the House Armed Services Committee's Subcommittee on Strategic Forces. This portion of his testimony did not come up during the hearing, and it has not been reported publicly to date.

    Repairing trust ... "Major issues with the Vulcan have overshadowed its successful certification resulting in delays to the launch of four national security missions," Purdy wrote. "Despite the retirement of highly successful Atlas and Delta launch vehicles, the transition to Vulcan has been slow and continues to impact the completion of Space Force mission objectives." It has widely been known in the space community that military officials, who supported Vulcan with development contracts for the rocket and its engines that exceeded $1 billion, have been unhappy with the pace of the rocket's development. It was originally due to launch in 2020. At the end of his written testimony, Purdy emphasized that he expected ULA to do better. As part of his job as the Service Acquisition Executive for Space (SAE), Purdy noted that he has been tasked to transform space acquisition and to become more innovative. "For these programs, the prime contractors must re-establish baselines, establish a culture of accountability, and repair trust deficit to prove to the SAE that they are adopting the acquisition principles necessary to deliver capabilities at speed, on cost and on schedule."
    SpaceX's growth on the West Coast. SpaceX is moving ahead with expansion plans at Vandenberg Space Force Base, California, that will double its West Coast launch cadence and enable Falcon Heavy rockets to fly from California, Spaceflight Now reports. Last week, the Department of the Air Force issued its Draft Environmental Impact Statement (EIS), which considers proposed modifications from SpaceX to Space Launch Complex 6 (SLC-6) at Vandenberg. These modifications will include changes to support launches of Falcon 9 and Falcon Heavy rockets, the construction of two new landing pads for Falcon boosters adjacent to SLC-6, the demolition of unneeded structures at SLC-6, and increasing SpaceX’s permitted launch cadence from Vandenberg from 50 launches to 100.

    Doubling the fun ... The transformation of SLC-6 would include quite a bit of overhaul. Its most recent tenant, United Launch Alliance, previously used it for Delta IV rockets from 2006 through its final launch in September 2022. The following year, the Space Force handed over the launch pad to SpaceX, which lacked a pad at Vandenberg capable of supporting Falcon Heavy missions. The estimated launch cadence between SpaceX’s existing Falcon 9 pad at Vandenberg, known as SLC-4E, and SLC-6 would be a 70-11 split for Falcon 9 rockets in 2026, with one Falcon Heavy at SLC-6, for a total of 82 launches. That would increase to a 70-25 Falcon 9 split in 2027 and 2028, with an estimated five Falcon Heavy launches in each of those years. (submitted by EllPeaTea)

    Next three launches
    May 23: Falcon 9 | Starlink 11-16 | Vandenberg Space Force Base, California | 20:36 UTC
    May 24: Falcon 9 | Starlink 12-22 | Cape Canaveral Space Force Station, Florida | 17:19 UTC
    May 27: Falcon 9 | Starlink 17-1 | Vandenberg Space Force Base, California | 16:14 UTC

    Stephen Clark
    Space Reporter


    Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

    Rocket Report: SpaceX’s expansion at Vandenberg; India’s PSLV fails in flight
    Observation Rocket Report: SpaceX’s expansion at Vandenberg; India’s PSLV fails in flight China's diversity in rockets was evident this week, with four types of launchers in action. Stephen Clark – May 23, 2025 7:00 am | 7 Dawn Aerospace's Mk-II Aurora airplane in flight over New Zealand last year. Credit: Dawn Aerospace Dawn Aerospace's Mk-II Aurora airplane in flight over New Zealand last year. Credit: Dawn Aerospace Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more Welcome to Edition 7.45 of the Rocket Report! Let's talk about spaceplanes. Since the Space Shuttle, spaceplanes have, at best, been a niche part of the space transportation business. The US Air Force's uncrewed X-37B and a similar vehicle operated by China's military are the only spaceplanes to reach orbit since the last shuttle flight in 2011, and both require a lift from a conventional rocket. Virgin Galactic's suborbital space tourism platform is also a spaceplane of sorts. A generation or two ago, one of the chief arguments in favor of spaceplanes was that they were easier to recover and reuse. Today, SpaceX routinely reuses capsules and rockets that look much more like conventional space vehicles than the winged designs of yesteryear. Spaceplanes are undeniably alluring in appearance, but they have the drawback of carrying extra weightinto space that won't be used until the final minutes of a mission. So, do they have a future? As always, we welcome reader submissions. If you don't want to miss an issue, please subscribe using the box below. Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar. One of China's commercial rockets returns to flight. The Kinetica-1 rocket launched Wednesday for the first time since a failure doomed its previous attempt to reach orbit in December, according to the vehicle's developer and operator, CAS Space. The Kinetica-1 is one of several small Chinese solid-fueled launch vehicles managed by a commercial company, although with strict government oversight and support. CAS Space, a spinoff of the Chinese Academy of Sciences, said its Kinetica-1 rocket deployed multiple payloads with "excellent orbit insertion accuracy." This was the seventh flight of a Kinetica-1 rocket since its debut in 2022. Back in action ... "Kinetica-1 is back!" CAS Space posted on X. "Mission Y7 has just successfully sent six satellites into designated orbits, making a total of 63 satellites or 6 tons of payloads since its debut. Lots of missions are planned for the coming months. 2025 is going to be awesome." The Kinetica-1 is designed to place up to 2 metric tons of payload into low-Earth orbit. A larger liquid-fueled rocket, Kinetica-2, is scheduled to debut later this year. The Ars Technica Rocket Report The easiest way to keep up with Eric Berger's and Stephen Clark's reporting on all things space is to sign up for our newsletter. We'll collect their stories and deliver them straight to your inbox. Sign Me Up! French government backs a spaceplane startup. French spaceplane startup AndroMach announced May 15 that it received a contract from CNES, the French space agency, to begin testing an early prototype of its Banger v1 rocket engine, European Spaceflight reports. Founded in 2023, AndroMach is developing a pair of spaceplanes that will be used to perform suborbital and orbital missions to space. 
A suborbital spaceplane will utilize turbojet engines for horizontal takeoff and landing, and a pressure-fed biopropane/liquid oxygen rocket engine to reach space. Test flights of this smaller vehicle will begin in early 2027. A risky proposition ... A larger ÉTOILE "orbital shuttle" is designed to be launched by various small launch vehicles and will be capable of carrying payloads of up to 100 kilograms. According to the company, initial test flights of ÉTOILE are expected to begin at the beginning of the next decade. It's unclear how much CNES is committing to AndroMach through this contract, but the company says the funding will support testing of an early demonstrator for its propane-fueled engine, with a focus on evaluating its thermodynamic performance. It's good to see European governments supporting developments in commercial space, but the path to a small commercial orbital spaceplane is rife with risk.Dawn Aerospace is taking orders. Another spaceplane company in a more advanced stage of development says it is now taking customer orders for flights to the edge of space. New Zealand-based Dawn Aerospace said it is beginning to take orders for its remotely piloted, rocket-powered suborbital spaceplane, known as Aurora, with first deliveries expected in 2027, Aviation Week & Space Technology reports. "This marks a historic milestone: the first time a space-capable vehicle—designed to fly beyond the Kármán line—has been offered for direct sale to customers," Dawn Aerospace said in a statement. While it hasn't yet reached space, Dawn's Aurora spaceplane flew to supersonic speed for the first time last year and climbed to an altitude of 82,500 feet, setting a record for the fastest climb from a runway to 20 kilometers. Further along ... Aurora is small in stature, measuring just 15.7 feetlong. It's designed to loft a payload of up to 22 poundsabove the Kármán line for up to three minutes of microgravity, before returning to a runway landing. Eventually, Dawn wants to reduce the turnaround time between Aurora flights to less than four hours. "Aurora is set to become the fastest and highest-flying aircraft ever to take off from a conventional runway, blending the extreme performance of rocket propulsion with the reusability and operational simplicity of traditional aviation," Dawn said. The company's business model is akin to commercial airlines, where operators can purchase an aircraft directly from a manufacturer and manage their own operations.India's workhorse rocket falls short of orbit. In a rare setback, Indian Space Research Organisation'slaunch vehicle PSLV-C61 malfunctioned and failed to place a surveillance satellite into the intended orbit last weekend, the Times of India reported. The Polar Satellite Launch Vehicle lifted off from a launch pad on the southeastern coast of India early Sunday, local time, with a radar reconnaissance satellite named EOS-09, or RISAT-1B. The satellite was likely intended to gather intelligence for the Indian military. "The country's military space capabilities, already hindered by developmental challenges, have suffered another setback with the loss of a potential strategic asset," the Times of India wrote. What happened? ... V. Narayanan, ISRO's chairman, later said that the rocket’s performance was normal until the third stage. The PSLV's third stage, powered by a solid rocket motor, suffered a "fall in chamber pressure" and the mission could not be accomplished, Narayanan said. Investigators are probing the root cause of the failure. 
Telemetry data indicated the rocket deviated from its planned flight path around six minutes after launch, when it was traveling more than 12,600 mph, well short of the speed it needed to reach orbital velocity. The rocket and its payload fell into the Indian Ocean south of the launch site. This was the first PSLV launch failure in eight years, ending a streak of 21 consecutive successful flights. SES makes a booking with Impulse Space. SES, owner of the world's largest fleet of geostationary satellites, plans to use Impulse Space’s Helios kick stage to take advantage of lower-cost, low-Earth-orbitlaunch vehicles and get its satellites quickly into higher orbits, Aviation Week & Space Technology reports. SES hopes the combination will break a traditional launch conundrum for operators of medium-Earth-orbitand geostationary orbit. These operators often must make a trade-off between a lower-cost launch that puts them farther from their satellite's final orbit, or a more expensive launch that can expedite their satellite's entry into service. A matter of hours ... On Thursday, SES and Impulse Space announced a multi-launch agreement to use the methane-fueled Helios kick stage. "The first mission, currently planned for 2027, will feature a dedicated deployment from a medium-lift launcher in LEO, followed by Helios transferring the 4-ton-class payload directly to GEO within eight hours of launch," Impulse said in a statement. Typically, this transit to GEO takes several weeks to several months, depending on the satellite's propulsion system. "Today, we’re not only partnering with Impulse to bring our satellites faster to orbit, but this will also allow us to extend their lifetime and accelerate service delivery to our customers," said Adel Al-Saleh, CEO of SES. "We're proud to become Helios' first dedicated commercial mission." Unpacking China's spaceflight patches. There's a fascinating set of new patches Chinese officials released for a series of launches with top-secret satellites over the last two months, Ars reports. These four patches depict Buddhist gods with a sense of artistry and sharp colors that stand apart from China's previous spaceflight emblems, and perhaps—or perhaps not—they can tell us something about the nature of the missions they represent. The missions launched so-called TJS satellites toward geostationary orbit, where they most likely will perform missions in surveillance, signals intelligence, or missile warning.  Making connections ... It's not difficult to start making connections between the Four Heavenly Gods and the missions that China's TJS satellites likely carry out in space. A protector with an umbrella? An all-seeing entity? This sounds like a possible link to spy craft or missile warning, but there's a chance Chinese officials approved the patches to misdirect outside observers, or there's no connection at all. China aims for an asteroid. China is set to launch its second Tianwen deep space exploration mission late May, targeting both a near-Earth asteroid and a main belt comet, Space News reports. The robotic Tianwen-2 spacecraft is being integrated with a Long March 3B rocket at the Xichang Satellite Launch Center in southwest China, the country's top state-owned aerospace contractor said. Airspace closure notices indicate a four-hour-long launch window opening at noon EDTon May 28. Backup launch windows are scheduled for May 29 and 30. New frontiers ... 
Tianwen-2's first goal is to collect samples from a near-Earth asteroid designated 469219 Kamoʻoalewa, or 2016 HO3, and return them to Earth in late 2027 with a reentry module. The Tianwen-2 mothership will then set a course toward a comet for a secondary mission. This will be China's first sample return mission from beyond the Moon. The asteroid selected as the target for Tianwen-2 is believed by scientists to be less than 100 meters, or 330 feet, in diameter, and may be made of material thrown off the Moon some time in its ancient past. Results from Tianwen-2 may confirm that hypothesis.Upgraded methalox rocket flies from Jiuquan. Another one of China's privately funded launch companies achieved a milestone this week. Landspace launched an upgraded version of its Zhuque-2E rocket Saturday from the Jiuquan launch base in northwestern China, Space News reports. The rocket delivered six satellites to orbit for a range of remote sensing, Earth observation, and technology demonstration missions. The Zhuque-2E is an improved version of the Zhuque-2, which became the first liquid methane-fueled rocket in the world to reach orbit in 2023. Larger envelope ... This was the second flight of the Zhuque-2E rocket design, but the first to utilize a wider payload fairing to provide more volume for satellites on their ride into space. The Zhuque-2E is a stepping stone toward a much larger rocket Landspace is developing called the Zhuque-3, a stainless steel launcher with a reusable first stage booster that, at least outwardly, bears some similarities to SpaceX's Falcon 9.FAA clears SpaceX for Starship Flight 9. The Federal Aviation Administration gave the green light Thursday for SpaceX to launch the next test flight of its Starship mega-rocket as soon as next week, following two consecutive failures earlier this year, Ars reports. The failures set back SpaceX's Starship program by several months. The company aims to get the rocket's development back on track with the upcoming launch, Starship's ninth full-scale test flight since its debut in April 2023. Starship is central to SpaceX's long-held ambition to send humans to Mars and is the vehicle NASA has selected to land astronauts on the Moon under the umbrella of the government's Artemis program. Targeting Tuesday, for now ... In a statement Thursday, the FAA said SpaceX is authorized to launch the next Starship test flight, known as Flight 9, after finding the company "meets all of the rigorous safety, environmental and other licensing requirements." SpaceX has not confirmed a target launch date for the next launch of Starship, but warning notices for pilots and mariners to steer clear of hazard areas in the Gulf of Mexico suggest the flight might happen as soon as the evening of Tuesday, May 27. The rocket will lift off from Starbase, Texas, SpaceX's privately owned spaceport near the US-Mexico border. The FAA's approval comes with some stipulations, including that the launch must occur during "non-peak" times for air traffic and a larger closure of airspace downrange from Starbase. Space Force is fed up with Vulcan delays. In recent written testimony to a US House of Representatives subcommittee that oversees the military, the senior official responsible for purchasing launches for national security missions blistered one of the country's two primary rocket providers, Ars reports. The remarks from Major General Stephen G. 
Purdy, acting assistant secretary of the Air Force for Space Acquisition and Integration, concerned United Launch Alliance and its long-delayed development of the large Vulcan rocket. "The ULA Vulcan program has performed unsatisfactorily this past year," Purdy said in written testimony during a May 14 hearing before the House Armed Services Committee's Subcommittee on Strategic Forces. This portion of his testimony did not come up during the hearing, and it has not been reported publicly to date. Repairing trust ... "Major issues with the Vulcan have overshadowed its successful certification resulting in delays to the launch of four national security missions," Purdy wrote. "Despite the retirement of highly successful Atlas and Delta launch vehicles, the transition to Vulcan has been slow and continues to impact the completion of Space Force mission objectives." It has widely been known in the space community that military officials, who supported Vulcan with development contracts for the rocket and its engines that exceeded billion, have been unhappy with the pace of the rocket's development. It was originally due to launch in 2020. At the end of his written testimony, Purdy emphasized that he expected ULA to do better. As part of his job as the Service Acquisition Executive for Space, Purdy noted that he has been tasked to transform space acquisition and to become more innovative. "For these programs, the prime contractors must re-establish baselines, establish a culture of accountability, and repair trust deficit to prove to the SAE that they are adopting the acquisition principles necessary to deliver capabilities at speed, on cost and on schedule." SpaceX's growth on the West Coast. SpaceX is moving ahead with expansion plans at Vandenberg Space Force Base, California, that will double its West Coast launch cadence and enable Falcon Heavy rockets to fly from California, Spaceflight Now reports. Last week, the Department of the Air Force issued its Draft Environmental Impact Statement, which considers proposed modifications from SpaceX to Space Launch Complex 6at Vandenberg. These modifications will include changes to support launches of Falcon 9 and Falcon Heavy rockets, the construction of two new landing pads for Falcon boosters adjacent to SLC-6, the demolition of unneeded structures at SLC-6, and increasing SpaceX’s permitted launch cadence from Vandenberg from 50 launches to 100. Doubling the fun ... The transformation of SLC-6 would include quite a bit of overhaul. Its most recent tenant, United Launch Alliance, previously used it for Delta IV rockets from 2006 through its final launch in September 2022. The following year, the Space Force handed over the launch pad to SpaceX, which lacked a pad at Vandenberg capable of supporting Falcon Heavy missions. The estimated launch cadence between SpaceX’s existing Falcon 9 pad at Vandenberg, known as SLC-4E, and SLC-6 would be a 70-11 split for Falcon 9 rockets in 2026, with one Falcon Heavy at SLC-6, for a total of 82 launches. 
That would increase to a 70-25 Falcon 9 split in 2027 and 2028, with an estimated five Falcon Heavy launches in each of those years.Next three launches May 23: Falcon 9 | Starlink 11-16 | Vandenberg Space Force Base, California | 20:36 UTC May 24: Falcon 9 | Starlink 12-22 | Cape Canaveral Space Force Station, Florida | 17:19 UTC May 27: Falcon 9 | Starlink 17-1 | Vandenberg Space Force Base, California | 16:14 UTC Stephen Clark Space Reporter Stephen Clark Space Reporter Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet. 7 Comments #rocket #report #spacexs #expansion #vandenberg
    ARSTECHNICA.COM
    Rocket Report: SpaceX’s expansion at Vandenberg; India’s PSLV fails in flight
    Observation Rocket Report: SpaceX’s expansion at Vandenberg; India’s PSLV fails in flight China's diversity in rockets was evident this week, with four types of launchers in action. Stephen Clark – May 23, 2025 7:00 am | 7 Dawn Aerospace's Mk-II Aurora airplane in flight over New Zealand last year. Credit: Dawn Aerospace Dawn Aerospace's Mk-II Aurora airplane in flight over New Zealand last year. Credit: Dawn Aerospace Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more Welcome to Edition 7.45 of the Rocket Report! Let's talk about spaceplanes. Since the Space Shuttle, spaceplanes have, at best, been a niche part of the space transportation business. The US Air Force's uncrewed X-37B and a similar vehicle operated by China's military are the only spaceplanes to reach orbit since the last shuttle flight in 2011, and both require a lift from a conventional rocket. Virgin Galactic's suborbital space tourism platform is also a spaceplane of sorts. A generation or two ago, one of the chief arguments in favor of spaceplanes was that they were easier to recover and reuse. Today, SpaceX routinely reuses capsules and rockets that look much more like conventional space vehicles than the winged designs of yesteryear. Spaceplanes are undeniably alluring in appearance, but they have the drawback of carrying extra weight (wings) into space that won't be used until the final minutes of a mission. So, do they have a future? As always, we welcome reader submissions. If you don't want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar. One of China's commercial rockets returns to flight. The Kinetica-1 rocket launched Wednesday for the first time since a failure doomed its previous attempt to reach orbit in December, according to the vehicle's developer and operator, CAS Space. The Kinetica-1 is one of several small Chinese solid-fueled launch vehicles managed by a commercial company, although with strict government oversight and support. CAS Space, a spinoff of the Chinese Academy of Sciences, said its Kinetica-1 rocket deployed multiple payloads with "excellent orbit insertion accuracy." This was the seventh flight of a Kinetica-1 rocket since its debut in 2022. Back in action ... "Kinetica-1 is back!" CAS Space posted on X. "Mission Y7 has just successfully sent six satellites into designated orbits, making a total of 63 satellites or 6 tons of payloads since its debut. Lots of missions are planned for the coming months. 2025 is going to be awesome." The Kinetica-1 is designed to place up to 2 metric tons of payload into low-Earth orbit. A larger liquid-fueled rocket, Kinetica-2, is scheduled to debut later this year. The Ars Technica Rocket Report The easiest way to keep up with Eric Berger's and Stephen Clark's reporting on all things space is to sign up for our newsletter. We'll collect their stories and deliver them straight to your inbox. Sign Me Up! French government backs a spaceplane startup. French spaceplane startup AndroMach announced May 15 that it received a contract from CNES, the French space agency, to begin testing an early prototype of its Banger v1 rocket engine, European Spaceflight reports. 
Founded in 2023, AndroMach is developing a pair of spaceplanes that will be used to perform suborbital and orbital missions to space. A suborbital spaceplane will utilize turbojet engines for horizontal takeoff and landing, and a pressure-fed biopropane/liquid oxygen rocket engine to reach space. Test flights of this smaller vehicle will begin in early 2027. A risky proposition ... A larger ÉTOILE "orbital shuttle" is designed to be launched by various small launch vehicles and will be capable of carrying payloads of up to 100 kilograms (220 pounds). According to the company, initial test flights of ÉTOILE are expected to begin at the beginning of the next decade. It's unclear how much CNES is committing to AndroMach through this contract, but the company says the funding will support testing of an early demonstrator for its propane-fueled engine, with a focus on evaluating its thermodynamic performance. It's good to see European governments supporting developments in commercial space, but the path to a small commercial orbital spaceplane is rife with risk. (submitted by EllPeaTea) Dawn Aerospace is taking orders. Another spaceplane company in a more advanced stage of development says it is now taking customer orders for flights to the edge of space. New Zealand-based Dawn Aerospace said it is beginning to take orders for its remotely piloted, rocket-powered suborbital spaceplane, known as Aurora, with first deliveries expected in 2027, Aviation Week & Space Technology reports. "This marks a historic milestone: the first time a space-capable vehicle—designed to fly beyond the Kármán line (100 kilometers or 328,000 feet)—has been offered for direct sale to customers," Dawn Aerospace said in a statement. While it hasn't yet reached space, Dawn's Aurora spaceplane flew to supersonic speed for the first time last year and climbed to an altitude of 82,500 feet (25.1 kilometers), setting a record for the fastest climb from a runway to 20 kilometers. Further along ... Aurora is small in stature, measuring just 15.7 feet (4.8 meters) long. It's designed to loft a payload of up to 22 pounds (10 kilograms) above the Kármán line for up to three minutes of microgravity, before returning to a runway landing. Eventually, Dawn wants to reduce the turnaround time between Aurora flights to less than four hours. "Aurora is set to become the fastest and highest-flying aircraft ever to take off from a conventional runway, blending the extreme performance of rocket propulsion with the reusability and operational simplicity of traditional aviation," Dawn said. The company's business model is akin to commercial airlines, where operators can purchase an aircraft directly from a manufacturer and manage their own operations. (submitted by EllPeaTea) India's workhorse rocket falls short of orbit. In a rare setback, Indian Space Research Organisation's (ISRO) launch vehicle PSLV-C61 malfunctioned and failed to place a surveillance satellite into the intended orbit last weekend, the Times of India reported. The Polar Satellite Launch Vehicle lifted off from a launch pad on the southeastern coast of India early Sunday, local time, with a radar reconnaissance satellite named EOS-09, or RISAT-1B. The satellite was likely intended to gather intelligence for the Indian military. "The country's military space capabilities, already hindered by developmental challenges, have suffered another setback with the loss of a potential strategic asset," the Times of India wrote. What happened? ... V. 
Narayanan, ISRO's chairman, later said that the rocket’s performance was normal until the third stage. The PSLV's third stage, powered by a solid rocket motor, suffered a "fall in chamber pressure" and the mission could not be accomplished, Narayanan said. Investigators are probing the root cause of the failure. Telemetry data indicated the rocket deviated from its planned flight path around six minutes after launch, when it was traveling more than 12,600 mph (5.66 kilometers per second), well short of the speed it needed to reach orbital velocity. The rocket and its payload fell into the Indian Ocean south of the launch site. This was the first PSLV launch failure in eight years, ending a streak of 21 consecutive successful flights. (submitted by EllPeaTea) SES makes a booking with Impulse Space. SES, owner of the world's largest fleet of geostationary satellites, plans to use Impulse Space’s Helios kick stage to take advantage of lower-cost, low-Earth-orbit (LEO) launch vehicles and get its satellites quickly into higher orbits, Aviation Week & Space Technology reports. SES hopes the combination will break a traditional launch conundrum for operators of medium-Earth-orbit (MEO) and geostationary orbit (GEO). These operators often must make a trade-off between a lower-cost launch that puts them farther from their satellite's final orbit, or a more expensive launch that can expedite their satellite's entry into service. A matter of hours ... On Thursday, SES and Impulse Space announced a multi-launch agreement to use the methane-fueled Helios kick stage. "The first mission, currently planned for 2027, will feature a dedicated deployment from a medium-lift launcher in LEO, followed by Helios transferring the 4-ton-class payload directly to GEO within eight hours of launch," Impulse said in a statement. Typically, this transit to GEO takes several weeks to several months, depending on the satellite's propulsion system. "Today, we’re not only partnering with Impulse to bring our satellites faster to orbit, but this will also allow us to extend their lifetime and accelerate service delivery to our customers," said Adel Al-Saleh, CEO of SES. "We're proud to become Helios' first dedicated commercial mission." Unpacking China's spaceflight patches. There's a fascinating set of new patches Chinese officials released for a series of launches with top-secret satellites over the last two months, Ars reports. These four patches depict Buddhist gods with a sense of artistry and sharp colors that stand apart from China's previous spaceflight emblems, and perhaps—or perhaps not—they can tell us something about the nature of the missions they represent. The missions launched so-called TJS satellites toward geostationary orbit, where they most likely will perform missions in surveillance, signals intelligence, or missile warning.  Making connections ... It's not difficult to start making connections between the Four Heavenly Gods and the missions that China's TJS satellites likely carry out in space. A protector with an umbrella? An all-seeing entity? This sounds like a possible link to spy craft or missile warning, but there's a chance Chinese officials approved the patches to misdirect outside observers, or there's no connection at all. China aims for an asteroid. China is set to launch its second Tianwen deep space exploration mission late May, targeting both a near-Earth asteroid and a main belt comet, Space News reports. 
The robotic Tianwen-2 spacecraft is being integrated with a Long March 3B rocket at the Xichang Satellite Launch Center in southwest China, the country's top state-owned aerospace contractor said. Airspace closure notices indicate a four-hour-long launch window opening at noon EDT (16:00–20:00 UTC) on May 28. Backup launch windows are scheduled for May 29 and 30. New frontiers ... Tianwen-2's first goal is to collect samples from a near-Earth asteroid designated 469219 Kamoʻoalewa, or 2016 HO3, and return them to Earth in late 2027 with a reentry module. The Tianwen-2 mothership will then set a course toward a comet for a secondary mission. This will be China's first sample return mission from beyond the Moon. The asteroid selected as the target for Tianwen-2 is believed by scientists to be less than 100 meters, or 330 feet, in diameter, and may be made of material thrown off the Moon some time in its ancient past. Results from Tianwen-2 may confirm that hypothesis. (submitted by EllPeaTea) Upgraded methalox rocket flies from Jiuquan. Another one of China's privately funded launch companies achieved a milestone this week. Landspace launched an upgraded version of its Zhuque-2E rocket Saturday from the Jiuquan launch base in northwestern China, Space News reports. The rocket delivered six satellites to orbit for a range of remote sensing, Earth observation, and technology demonstration missions. The Zhuque-2E is an improved version of the Zhuque-2, which became the first liquid methane-fueled rocket in the world to reach orbit in 2023. Larger envelope ... This was the second flight of the Zhuque-2E rocket design, but the first to utilize a wider payload fairing to provide more volume for satellites on their ride into space. The Zhuque-2E is a stepping stone toward a much larger rocket Landspace is developing called the Zhuque-3, a stainless steel launcher with a reusable first stage booster that, at least outwardly, bears some similarities to SpaceX's Falcon 9. (submitted by EllPeaTea) FAA clears SpaceX for Starship Flight 9. The Federal Aviation Administration gave the green light Thursday for SpaceX to launch the next test flight of its Starship mega-rocket as soon as next week, following two consecutive failures earlier this year, Ars reports. The failures set back SpaceX's Starship program by several months. The company aims to get the rocket's development back on track with the upcoming launch, Starship's ninth full-scale test flight since its debut in April 2023. Starship is central to SpaceX's long-held ambition to send humans to Mars and is the vehicle NASA has selected to land astronauts on the Moon under the umbrella of the government's Artemis program. Targeting Tuesday, for now ... In a statement Thursday, the FAA said SpaceX is authorized to launch the next Starship test flight, known as Flight 9, after finding the company "meets all of the rigorous safety, environmental and other licensing requirements." SpaceX has not confirmed a target launch date for the next launch of Starship, but warning notices for pilots and mariners to steer clear of hazard areas in the Gulf of Mexico suggest the flight might happen as soon as the evening of Tuesday, May 27. The rocket will lift off from Starbase, Texas, SpaceX's privately owned spaceport near the US-Mexico border. The FAA's approval comes with some stipulations, including that the launch must occur during "non-peak" times for air traffic and a larger closure of airspace downrange from Starbase. 
Space Force is fed up with Vulcan delays. In recent written testimony to a US House of Representatives subcommittee that oversees the military, the senior official responsible for purchasing launches for national security missions blistered one of the country's two primary rocket providers, Ars reports. The remarks from Major General Stephen G. Purdy, acting assistant secretary of the Air Force for Space Acquisition and Integration, concerned United Launch Alliance and its long-delayed development of the large Vulcan rocket. "The ULA Vulcan program has performed unsatisfactorily this past year," Purdy said in written testimony during a May 14 hearing before the House Armed Services Committee's Subcommittee on Strategic Forces. This portion of his testimony did not come up during the hearing, and it has not been reported publicly to date. Repairing trust ... "Major issues with the Vulcan have overshadowed its successful certification resulting in delays to the launch of four national security missions," Purdy wrote. "Despite the retirement of highly successful Atlas and Delta launch vehicles, the transition to Vulcan has been slow and continues to impact the completion of Space Force mission objectives." It has widely been known in the space community that military officials, who supported Vulcan with development contracts for the rocket and its engines that exceeded $1 billion, have been unhappy with the pace of the rocket's development. It was originally due to launch in 2020. At the end of his written testimony, Purdy emphasized that he expected ULA to do better. As part of his job as the Service Acquisition Executive for Space (SAE), Purdy noted that he has been tasked to transform space acquisition and to become more innovative. "For these programs, the prime contractors must re-establish baselines, establish a culture of accountability, and repair trust deficit to prove to the SAE that they are adopting the acquisition principles necessary to deliver capabilities at speed, on cost and on schedule." SpaceX's growth on the West Coast. SpaceX is moving ahead with expansion plans at Vandenberg Space Force Base, California, that will double its West Coast launch cadence and enable Falcon Heavy rockets to fly from California, Spaceflight Now reports. Last week, the Department of the Air Force issued its Draft Environmental Impact Statement (EIS), which considers proposed modifications from SpaceX to Space Launch Complex 6 (SLC-6) at Vandenberg. These modifications will include changes to support launches of Falcon 9 and Falcon Heavy rockets, the construction of two new landing pads for Falcon boosters adjacent to SLC-6, the demolition of unneeded structures at SLC-6, and increasing SpaceX’s permitted launch cadence from Vandenberg from 50 launches to 100. Doubling the fun ... The transformation of SLC-6 would include quite a bit of overhaul. Its most recent tenant, United Launch Alliance, previously used it for Delta IV rockets from 2006 through its final launch in September 2022. The following year, the Space Force handed over the launch pad to SpaceX, which lacked a pad at Vandenberg capable of supporting Falcon Heavy missions. The estimated launch cadence between SpaceX’s existing Falcon 9 pad at Vandenberg, known as SLC-4E, and SLC-6 would be a 70-11 split for Falcon 9 rockets in 2026, with one Falcon Heavy at SLC-6, for a total of 82 launches. That would increase to a 70-25 Falcon 9 split in 2027 and 2028, with an estimated five Falcon Heavy launches in each of those years. 
Next three launches:

May 23: Falcon 9 | Starlink 11-16 | Vandenberg Space Force Base, California | 20:36 UTC
May 24: Falcon 9 | Starlink 12-22 | Cape Canaveral Space Force Station, Florida | 17:19 UTC
May 27: Falcon 9 | Starlink 17-1 | Vandenberg Space Force Base, California | 16:14 UTC

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world's space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
  • I Wish Neil Druckmann Would Stop Confirming Things About The Last Of Us

You might not know it based on my scathing recaps of The Last of Us’ second season, but I love this series. I love the moral conundrums it presents, the violent grief it depicts, and the games’ excellent writing that poignantly brings all of those complicated emotions to the surface. What I don’t like is listening to pretty much any of the creative team talk about the series, especially when it comes to weighing in on the decade-long discourse around its complex storylines. Even when I agree with series director Neil Druckmann’s interpretation of something, we’d all rather he just let bad readings fester in the corners of the internet than tell us exactly what something means. Nevertheless, he continues to do so in interviews.

In a discussion with the Sacred Symbols podcast (thanks, IGN), Druckmann talked about the end of The Last of Us Part I, which was adapted for television in the HBO show’s first season. In this climactic moment, Joel—a smuggler turned surrogate father to Ellie, the young girl immune to a fungal infection that has leveled the series’ post-apocalyptic world—massacres members of a revolutionary group called the Fireflies, who sought to use Ellie’s immunity to create a vaccine. They could potentially have saved millions of lives and helped society rebuild after decades of ruin. But after months of traveling across the United States to reach the group’s base in Salt Lake City, Joel wasn’t willing to lose Ellie for something as small as the possibility of a world-saving vaccine. After the player fights their way through the facility and escapes with Ellie, Joel lies to her about what happened, and they live happily ever after(?) in Jackson, Wyoming…until the sequel, at least.

It’s a nuanced situation, and ever since The Last of Us launched in 2013, fans have debated the ethics of pretty much every character in this finale. However, one part of the discussion that has persisted is the question of whether or not the Fireflies would have been able to successfully create a cure or vaccine in the first place. This is the post-apocalypse. They’ve got one surgeon here who claims to be able to do the job, and even if they managed to concoct a vaccine, how would they distribute it? All of that is an interesting logistical discussion, but some fans have taken that talking point a step further and tried to claim that the plan’s questionable chance of success was part of Joel’s motivations. It’s very obvious that the man cares about Ellie’s life above all else, and didn’t stop to weigh up the vagaries of vaccine efficacy in a zombie apocalypse. I’ve always read these attempts to explain away Joel’s guilt as cope, and the idea that he had a firm confidence that the Fireflies’ attempts would fail as an effort to wash away the reality of Joel’s actions.

Yet now, Druckmann has confirmed that it was always the intention for the group’s medical team to be able to create the cure, essentially nuking that talking point. Am I upset that we can now put this obviously desperate theory to bed? No. Do I wish Druckmann would stop giving definitive answers to a story that has thrived in ambiguity and interpretation? Absolutely.

“Our intent was yes, they could [make a cure],” Druckmann said. “Now, is our science a little shaky that now people are now questioning it? Sure. Our science is a little shaky and people are now questioning it. I can’t say anything. I can say our intent was that they would have made a cure. That makes the most interesting philosophical question for what Joel does.”

Sure, it’s the most interesting interpretation because it actually interrogates everything you know about Joel based on how he presents. Giving him an out is just refusing to engage with the text. Do you want to debate if the Fireflies were equipped to save the world with a vaccine? That’s an entirely separate discussion from Joel’s motivations. But even so, we don’t need every detail spelled out. Maybe it’s because Druckmann is being constantly interviewed about this series after two games, more remasters than I care to count, and two seasons of television, but the more we talk to this dude (and HBO series showrunner Craig Mazin) and ask him to tell us what it all meant, the less interesting the story can be. I don’t want to know if Druckmann thinks Joel was right. I don’t need the author to tell me what I’m supposed to feel. It’s a major reason why the show bothers me so much, because it loves to tell you what lessons you’re supposed to learn rather than giving you a second to consider what you feel about it. I wish we lived in a world where The Last of Us’ marketing got to be as bold as the games. Just let it speak for itself.
  • I Write a Podcast Newsletter, and These Are My Favorite New Shows of 2025 (So Far)

All throughout 2025, I've been bursting with podcast recommendations (which might not be surprising, given that writing a podcast recommendation newsletter is part of my job). I've shared my lists of the best podcasts about liars and scammers, podcasts that expose the nonsense in politics and pop culture, and podcasts you'll like if you miss Heavyweight. But then I awoke and realized that we are almost halfway through the year, and I haven't spent nearly enough time talking about my favorite new shows that debuted this year. June is a great time to take stock of all the new podcasts from the first half of the year. These are the shows that made my jaw drop, made me laugh, and inspired me to subscribe—and pester all of my friends to do the same. I think you'll like them, too.

Alternate Realities (Embedded)

Embedded recently produced a three-part series, Alternate Realities, focused on a bet between reporter Zach Mack and his father, who intended to determine once and for all who was right about the other having been lost to conspiracy theories. Zach’s father had started to believe in chemtrails, that the government controls the weather, that ANTIFA staged the Jan. 6 riots, and that a cabal called the globalists is controlling the world. Zach…did not believe those things. In early 2024 the two agreed: Zach’s dad would make a list of 10 prophecies that he was 100% sure would happen (i.e., a bunch of Democrats would be convicted of treason and/or murder, the U.S. would come under martial law), and on Jan. 1, 2025, Zach would have to give his father $1,000 for every one that came to pass. For every one that didn’t, Zach would get the same. It’s a zingy idea for a series, but also a dark family story—the bet is the make-or-break thing for not just Zach and his dad, but for the entire family. Beyond the money, the stakes are high.

Debt Heads

Friends Jamie Alyson Feldman (@realgirlproject) and Rachel Gayle Webster (@webbythefox) are using storytelling, research, springiness, humor, and fun audio elements in their podcast Debt Heads, which examines Jamie’s deeply ingrained issues with debt and uses them as an entry point into the question of why so many young people are in the same boat. It's a fascinating dive into the issue of millennials and their money—harrowing and occasionally funny, and a rich listening experience even if you (like me) want to crawl under a table when the conversation turns to money.

Our Ancestors Were Messy

If you love the way Normal Gossip pulls you into the juicy drama of strangers, and especially if you also love history, you’ll get sucked right in to Our Ancestors Were Messy, Nichole Hill’s show about the gossip, scandals, and pop culture that made headlines in historical Black newspapers across America. Nichole tells true stories from the past (a Victorian-era love triangle that hit DC elites, a mystery concerning a tabloid sensation in Harlem) with help from a guest, placing you inside a vintage scandal, providing the context you need to understand why it was a scandal at all, and fleshing out the characters involved with the skill of a novelist. Nichole’s storytelling is descriptive, funny, conversational, and crisp, and her amazing sound production pumps life into it all.

Why Is Amy in the Bath?

Have you ever noticed that Amy Adams seems to do a lot of bathtub and shower scenes in her films? After listening to this show, you won’t be able to un-notice it. That fact certainly stuck out to Brandon R. Reynolds and Gabby Lombardo, who spun the observation into the podcast Why Is Amy in the Bath? In six episodes they ask: Is Amy, who has never won an Oscar, doing all these bathtub scenes because they offer the opportunity for the kind of dramatic acting that earns the biggest, golden-est prizes? Brandon and Gabby went through 1,500 movies, including all the Best Actress Oscar nominees, to see if there was a correlation with tub scenes, and their conclusions are the stuff of the best conspiracy theories.

What We Spend

If you love Refinery29's Money Diaries, or if you’re just a nosy person, you’re going to salivate over What We Spend, in which regular people take us, day by day and purchase by purchase, through what they spend in a week. It's like looking inside their wallets, flipping through their credit card statements, and hearing the personal stories behind the financial decisions they make. One person is scared about having to pay for a cat funeral. A 35-year-old asks her dad to pay her bills for a month. In each episode, the subject realizes, along with us, that there are usually deeply rooted personal issues underneath their money troubles and the anxieties they bring up. Listeners can contact the hosts for a spot on the show, but that's a huge no thanks from me! I’ll be listening, though.

Text Me Back

If you’re looking for a chat show that will have you laughing out loud without making you feel like you just lost a bunch of brain cells, try Text Me Back. Bestselling writer Lindy West and democracy policy expert Meagan Hatcher-Mays are childhood friends who get on the mic for convos that range from off-the-rails goofy stories to insightful pop-culture and political commentary, with an irresistible friendship vibe flowing throughout. Their chemistry is nothing that could be rehearsed or planned, and they are both such good storytellers that they can spin gold out of the most mundane things that happened to them in a given week. Text Me Back will be a balm for listeners who still miss the iconic podcast Call Your Girlfriend (RIP).

The Final Days of Sgt. Tibbs

Delivered in four short episodes, The Final Days of Sgt. Tibbs explores the fate of the titular geriatric cat, who went missing in Manchester, New Hampshire, then turned up dead, causing a huge blowup in the community he left behind. Rose, Sgt. Tibbs’ owner, was devastated when Tibbs went missing, and infuriated to learn that he might not have been missing at all, but in the hands of neighbors, the mother/daughter duo of Debbie and Sabrina, who claim to have saved the cat's life. We go in knowing that Tibbs has died. The question is, what happened? Todd Bookman puts a microscope to the kitty's last days and finds a story of adults behaving badly and a community torn apart. At one point, Todd wonders if there are better things he could be doing with his time (and microphone). “But imagine something more important than something you love disappearing and dying,” he says. “It seems worth every second trying to figure out what happened.” Pet lovers get it. RIP, Sgt. Tibbs.

We Came to the Forest

We Came to the Forest introduces you to Vienna Forrest, an environmental crusader remembering her life in the forest with a bunch of other activists as they protested the construction of Atlanta’s Cop City, one of the biggest police training facilities in the country. She speaks intimately about her partner Tortuguita (Manuel Esteban Paez Terán), another protester or “forest defender,” who was allegedly shot and killed by Atlanta law enforcement. We Came to the Forest revolves around Tortuguita’s murder and everything that led up to it. What seems obvious (Tortuguita was shot by the police) is tough to prove. A cop was also shot, but who shot him? There is no body cam footage to prove what happened. Through storytelling and interviews, the show will make you think about how fast things can turn sideways when law enforcement gets involved in a situation, and how thin the line can be between safety and danger.

CRAMPED

Kate Downey has been having debilitating period pain every month since she was 14 years old. Debilitating period pain is common, yet it's something nobody seems to want to talk about or research—and certainly nobody is trying to have fun with it. But Kate is doing all of the above with CRAMPED, which is somehow boisterous and dead serious at the same time. It's full of fascinating interviews, illuminating info, and helpful tips for anyone with a uterus. She gets smart, funny people on the mic to talk about their that-time-of-the-month experiences, what is really going on in their bodies and why nobody cares, and why Kate hasn’t been able to get answers from a doctor after 20 years of asking questions.

Suave (Season 2)

In its first season, Suave won a Pulitzer Prize for telling the story of Luis "Suave" Gonzalez, a convicted man who turned his life around in prison, and his relationship with journalist Maria Hinojosa. The show is assembled from years of recordings of their conversations, an audio document of the highs and lows of Suave's life both in and out of jail, and the mother/son bond that develops between the two. At the end, Suave is released, and we are left to wonder what freedom really means. That’s where season two picks up: Suave is now “Mr. Pulitzer,” but life on the outside is very hard.

Proxy

With her beautiful show Proxy, "emotional journalist" Yowei Shaw investigates and solves deeply intimate conundrums by proxy—she finds people with unresolved relationship issues and links them up with a stranger who can help them better understand what's going on. (Recently she connected a man whose wife left him for a woman with a woman who'd left her husband for a woman.) Yowei also appears on the massively popular NPR podcast Invisibilia, so you know you can trust her to deliver a good, professionally structured story. It's a space for unique conversations the likes of which I have never heard before.

Sea of Lies (Uncover)

On Sea of Lies (available on the Uncover podcast feed), Sam Mullins (Wild Boys) tells the tale of one of the most wanted men in the world, Albert Walker, who is arrested for fraud after a dead body wearing a recognizable watch washes ashore. The globe-spanning saga gets wilder from there, always zagging left when you think it will go right. Via meticulous reporting, Sea of Lies skirts around Walker’s manipulative tactics to get to the psychological questions at the root of his crimes.