• Why Designers Get Stuck In The Details And How To Stop

    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar?
    In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both the process pitfalls and the psychology behind them, because understanding these traps is the first step to overcoming them. I’ll also share the tactics I use to climb out of that trap.
    Reason #1: You’re Afraid To Show Rough Work
    We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed.
    I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them.
    The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief.
    The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem.
    So why do we hide these rough sketches? It’s not just a bad habit or plain silly. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology (like the research by Hewitt and Flett) shows there are a couple of flavors driving this:

    Socially prescribed perfectionism: It’s that nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den.
    Self-oriented perfectionism: Where you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off.

    Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback.
    Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift:
    Treat early sketches as disposable tools for thinking and actively share them to get feedback faster.

    Reason #2: You Fix The Symptom, Not The Cause
    Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data.
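    Before touching the layout, I would first try to verify the claim with data. Below is a minimal sketch of that first check, assuming a hypothetical event log; the file name, columns, and event names are invented for illustration and not from any real project.

```python
# A quick sanity check (illustrative only): of the users who actually saw the
# cart, how many ever clicked the payment button? "events.csv" and the event
# names below are hypothetical stand-ins for whatever analytics you have.
import pandas as pd

events = pd.read_csv("events.csv")  # columns: user_id, event, timestamp

reached_cart = set(events.loc[events["event"] == "cart_viewed", "user_id"])
clicked_pay = set(events.loc[events["event"] == "pay_button_clicked", "user_id"])

click_through = len(clicked_pay & reached_cart) / len(reached_cart)
print(f"{click_through:.1%} of users who saw the cart clicked the payment button")
```

    If that share turns out to be healthy, “users aren’t noticing the button” is probably the wrong diagnosis, and a bigger button won’t move the metric.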
    From my experience, here are several reasons why users might not be clicking that coveted button:

    Users don’t understand that this step is for payment.
    They understand it’s about payment but expect order confirmation first.
    Due to incorrect translation, users don’t understand what the button means.
    Lack of trust signals (no security icons, unclear seller information).
    Unexpected additional costs (hidden fees, shipping) that appear at this stage.
    Technical issues (inactive button, page freezing).

    Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly.
    Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously few users are clicking the button.
    Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers (which might come from a fear of speaking up or a desire to avoid challenging authority) — and understanding that using our product logic expertise proactively is crucial for modern designers.
    There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers.
    Reason #3: You’re Solving The Wrong Problem
    Before solving anything, ask whether the problem even deserves your attention.
    During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests (a method of comparing two versions of a design to determine which performs better) showed minimal impact, we continued to tweak those buttons.
    Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned:
    Without the right context, any visual tweak is lipstick on a pig.

    Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising.
    It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours.
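    Looking back, a small calculation would have made that “minimal impact” verdict hard to ignore. Here is a minimal sketch of the kind of check I mean, using the statsmodels library; the numbers are invented for illustration and are not the actual experiment data.

```python
# Illustrative only: comparing click-through into paid services between the old
# home screen (control) and the bigger-brighter-buttons variant. The counts are
# made up; plug in whatever your experiment tool reports.
from statsmodels.stats.proportion import proportions_ztest

control_clicks, control_users = 420, 10_000
variant_clicks, variant_users = 436, 10_000

stat, p_value = proportions_ztest(
    count=[variant_clicks, control_clicks],
    nobs=[variant_users, control_users],
)

lift = variant_clicks / variant_users - control_clicks / control_users
print(f"absolute lift: {lift:+.2%}, p-value: {p_value:.3f}")
```

    An absolute lift of a fraction of a percentage point with a p-value nowhere near significance is exactly the signal that the hypothesis, not the button styling, needs to change.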
    Reason #4: You’re Drowning In Unactionable Feedback
    We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow.
    What matters here are two things:

    The question you ask,
    The context you give.

    That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it.
    For instance:
    “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?”

    Here, you’ve stated the problem (conversion drop), shared your insight (user confusion), explained your solution (cost breakdown), and asked a direct question. It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?”
    Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately to either revisit those ideas or definitively set them aside.
    I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like, “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory.
    So, to wrap up this point, here are two recommendations:

    Prepare for every design discussion. A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”
    Actively work on separating feedback on your design from your self-worth. If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it.

    Reason #5: You’re Just Tired
    Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing.
    A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?” It described how judges deciding on release requests were far more likely to grant release early in the day (about 70% of cases) compared to late in the day (less than 10%) simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity.
    What helps here:

    Swap tasks. Trade tickets with another designer; novelty resets your focus.
    Talk to another designer. If NDA permits, ask peers outside the team for a sanity check.
    Step away. Even a ten‑minute walk can do more than a double‑shot espresso.

    By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit.

    And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time.
    Four Steps I Use To Avoid Drowning In Detail
    Knowing these potential traps, here’s the practical process I use to stay on track:
    1. Define the Core Problem & Business Goal
    Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask ‘why’ repeatedly. What user pain or business need are we addressing? Then, state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.
    2. Choose the Mechanic (Solution Principle)
    Once the core problem and goal are clear, lock the solution principle or ‘mechanic’ first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.
    3. Wireframe the Flow & Get Focused Feedback
    Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear context (as discussed in ‘Reason #4’) to get actionable feedback, not just vague opinions.
    4. Polish the Visuals (Mindfully)
    I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution.
    Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.
    Wrapping Up
    Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding. Yes, that can expose an uncomfortable truth — maybe the fuzzy core problem, or the prospect of tough feedback — but naming it gives you the power to face the real issue head-on. It keeps the project focused on solving the right problem, not just perfecting a flawed solution.
    Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
  • A federal court’s novel proposal to rein in Trump’s power grab

    Federal civil servants are supposed to enjoy robust protections against being fired or demoted for political reasons. But President Donald Trump has effectively stripped them of these protections by neutralizing the federal agencies that implement these safeguards.
    An agency known as the Merit Systems Protection Board (MSPB) hears civil servants’ claims that a “government employer discriminated against them, retaliated against them for whistleblowing, violated protections for veterans, or otherwise subjected them to an unlawful adverse employment action or prohibited personnel practice,” as a federal appeals court explained in an opinion on Tuesday. But the three-member board currently lacks the quorum it needs to operate because Trump fired two of the members.
    Trump also fired Hampton Dellinger, who until recently served as the special counsel of the United States, a role that investigates alleged violations of federal civil service protections and brings related cases to the MSPB. Trump recently nominated Paul Ingrassia, a far-right podcaster and recent law school graduate, to replace Dellinger.
    The upshot of these firings is that no one in the government is able to enforce laws and regulations protecting civil servants. As Dellinger noted in an interview, the morning before a federal appeals court determined that Trump could fire him, he’d “been able to get 6,000 newly hired federal employees back on the job,” and was working to get “all probationary employees put back on the job [after] their unlawful firing” by the Department of Government Efficiency and other Trump administration efforts to cull the federal workforce. These and other efforts to reinstate illegally fired federal workers are on hold, and may not resume until Trump leaves office.
    Which brings us to the US Court of Appeals for the Fourth Circuit’s decision in National Association of Immigration Judges v. Owen, which proposes an innovative solution to this problem.
    As the Owen opinion notes, the Supreme Court has held that the MSPB process is the only process a federal worker can use if they believe they’ve been fired in violation of federal civil service laws. So if that process is shut down, the worker is out of luck.
    But the Fourth Circuit’s Owen opinion argues that this “conclusion can only be true…when the statute functions as Congress intended.” That is, if the MSPB and the special counsel are unable to “fulfill their roles prescribed by” federal law, then the courts should pick up the slack and start hearing cases brought by illegally fired civil servants.
    For procedural reasons, the Fourth Circuit’s decision will not take effect right away — the court sent the case back down to a trial judge to “conduct a factual inquiry” into whether the MSPB continues to function. And, even after that inquiry is complete, the Trump administration is likely to appeal the Fourth Circuit’s decision to the Supreme Court if it wants to keep civil service protections on ice.
    If the justices agree with the circuit court, however, that will close a legal loophole that has left federal civil servants unprotected by laws that are still very much on the books. And it will cure a problem that the Supreme Court bears much of the blame for creating.
    The “unitary executive,” or why the Supreme Court is to blame for the loss of civil service protections
    Federal law provides that Dellinger could “be removed by the President only for inefficiency, neglect of duty, or malfeasance in office,” and members of the MSPB enjoy similar protections against being fired. Trump’s decision to fire these officials was illegal under these laws.
    But a federal appeals court nonetheless permitted Trump to fire Dellinger, and the Supreme Court recently backed Trump’s decision to fire the MSPB members as well. The reason is a legal theory known as the “unitary executive,” which is popular among Republican legal scholars, and especially among the six Republicans who control the Supreme Court.
    If you want to know all the details of this theory, I can point you to three different explainers I’ve written on the unitary executive. The short explanation is that the unitary executive theory claims that the president must have the power to fire top political appointees charged with executing federal laws – including officials who execute laws protecting civil servants from illegal firings.
    But the Supreme Court has never claimed that the unitary executive permits the president to fire any federal worker, regardless of whether Congress has protected them or not. In a seminal opinion laying out the unitary executive theory, for example, Justice Antonin Scalia argued that the president must have the power to remove “principal officers” — high-ranking officials like Dellinger who must be nominated by the president and confirmed by the Senate. Under Scalia’s approach, lower-ranking government workers may still be given some protection.
    The Fourth Circuit cannot override the Supreme Court’s decision to embrace the unitary executive theory. But the Owen opinion essentially tries to police the line drawn by Scalia. The Supreme Court has given Trump the power to fire some high-ranking officials, but he shouldn’t be able to use that power as a back door to eliminate job protections for all civil servants.
    The Fourth Circuit suggests that the federal law which simultaneously gave the MSPB exclusive authority over civil service disputes, while also protecting MSPB members from being fired for political reasons, must be read as a package. Congress, this argument goes, would not have agreed to shunt all civil service disputes to the MSPB if it had known that the Supreme Court would strip the MSPB of its independence. And so, if the MSPB loses its independence, it must also lose its exclusive authority over civil service disputes — and federal courts must regain the power to hear those cases.
    It remains to be seen whether this argument persuades a Republican Supreme Court — all three of the Fourth Circuit judges who decided the Owen case are Democrats, and two are Biden appointees. But the Fourth Circuit’s reasoning closely resembles the kind of inquiry that courts frequently engage in when a federal law is struck down.
    When a court declares a provision of federal law unconstitutional, it often needs to ask whether other parts of the law should fall along with the unconstitutional provision, an inquiry known as “severability.” Often, this severability analysis asks which hypothetical law Congress would have enacted if it had known that the one provision is invalid.
    The Fourth Circuit’s decision in Owen is essentially a severability opinion. It takes as a given the Supreme Court’s conclusion that laws protecting Dellinger and the MSPB members from being fired are unconstitutional, then asks which law Congress would have enacted if it had known that it could not protect MSPB members from political reprisal. The Fourth Circuit’s conclusion is that, if Congress had known that MSPB members cannot be politically independent, then it would not have given them exclusive authority over civil service disputes.
    If the Supreme Court permits Trump to neutralize the MSPB, that would fundamentally change how the government functions
    The idea that civil servants should be hired based on merit and insulated from political pressure is hardly new. The first law protecting civil servants was the Pendleton Civil Service Reform Act, which President Chester A. Arthur signed into law in 1883.
    Laws like the Pendleton Act do more than protect civil servants who, say, resist pressure to deny government services to the president’s enemies. They also make it possible for top government officials to actually do their jobs.
    Before the Pendleton Act, federal jobs were typically awarded as patronage — so when a Democratic administration took office, the Republicans who occupied most federal jobs would be fired and replaced by Democrats. This was obviously quite disruptive, and it made it difficult for the government to hire highly specialized workers. Why would someone go to the trouble of earning an economics degree and becoming an expert on federal monetary policy if they knew that their job in the Treasury Department would disappear the minute their party lost an election?
    Meanwhile, the task of filling all of these patronage jobs overwhelmed new presidents. As Candice Millard wrote in a 2011 biography of President James A. Garfield, the last president elected before the Pendleton Act, when Garfield took office, a line of job seekers began to form outside the White House “before he even sat down to breakfast.” By the time Garfield had eaten, this line “snaked down the front walk, out the gate, and onto Pennsylvania Avenue.” Garfield was assassinated by a disgruntled job seeker, a fact that likely helped build political support for the Pendleton Act.
    By neutralizing the MSPB, Trump is effectively undoing nearly 150 years’ worth of civil service reforms, and returning the federal government to a much more primitive state. At the very least, the Fourth Circuit’s decision in Owen is likely to force the Supreme Court to ask if it really wants a century and a half of work to unravel.
    #federal #courts #novel #proposal #rein
    A federal court’s novel proposal to rein in Trump’s power grab
    Limited-time offer: Get more than 30% off a Vox Membership. Join today to support independent journalism. Federal civil servants are supposed to enjoy robust protections against being fired or demoted for political reasons. But President Donald Trump has effectively stripped them of these protections by neutralizing the federal agencies that implement these safeguards.An agency known as the Merit Systems Protection Boardhears civil servants’ claims that a “government employer discriminated against them, retaliated against them for whistleblowing, violated protections for veterans, or otherwise subjected them to an unlawful adverse employment action or prohibited personnel practice,” as a federal appeals court explained in an opinion on Tuesday. But the three-member board currently lacks the quorum it needs to operate because Trump fired two of the members.Trump also fired Hampton Dellinger, who until recently served as the special counsel of the United States, a role that investigates alleged violations of federal civil service protections and brings related cases to the MSPB. Trump recently nominated Paul Ingrassia, a far-right podcaster and recent law school graduate to replace Dellinger.The upshot of these firings is that no one in the government is able to enforce laws and regulations protecting civil servants. As Dellinger noted in an interview, the morning before a federal appeals court determined that Trump could fire him, he’d “been able to get 6,000 newly hired federal employees back on the job,” and was working to get “all probationary employees put back on the jobtheir unlawful firing” by the Department of Government Efficiency and other Trump administration efforts to cull the federal workforce. These and other efforts to reinstate illegally fired federal workers are on hold, and may not resume until Trump leaves office.Which brings us to the US Court of Appeals for the Fourth Circuit’s decision in National Association of Immigration Judges v. Owen, which proposes an innovative solution to this problem.As the Owen opinion notes, the Supreme Court has held that the MSPB process is the only process a federal worker can use if they believe they’ve been fired in violation of federal civil service laws. So if that process is shut down, the worker is out of luck.But the Fourth Circuit’s Owen opinion argues that this “conclusion can only be true…when the statute functions as Congress intended.” That is, if the MSPB and the special counsel are unable to “fulfill their roles prescribed by” federal law, then the courts should pick up the slack and start hearing cases brought by illegally fired civil servants.For procedural reasons, the Fourth Circuit’s decision will not take effect right away — the court sent the case back down to a trial judge to “conduct a factual inquiry” into whether the MSPB continues to function. And, even after that inquiry is complete, the Trump administration is likely to appeal the Fourth Circuit’s decision to the Supreme Court if it wants to keep civil service protections on ice.If the justices agree with the circuit court, however, that will close a legal loophole that has left federal civil servants unprotected by laws that are still very much on the books. 
And it will cure a problem that the Supreme Court bears much of the blame for creating.The “unitary executive,” or why the Supreme Court is to blame for the loss of civil service protectionsFederal law provides that Dellinger could “be removed by the President only for inefficiency, neglect of duty, or malfeasance in office,” and members of the MSPB enjoy similar protections against being fired. Trump’s decision to fire these officials was illegal under these laws.But a federal appeals court nonetheless permitted Trump to fire Dellinger, and the Supreme Court recently backed Trump’s decision to fire the MSPB members as well. The reason is a legal theory known as the “unitary executive,” which is popular among Republican legal scholars, and especially among the six Republicans that control the Supreme Court.If you want to know all the details of this theory, I can point you to three different explainers I’ve written on the unitary executive. The short explanation is that the unitary executive theory claims that the president must have the power to fire top political appointees charged with executing federal laws – including officials who execute laws protecting civil servants from illegal firings.But the Supreme Court has never claimed that the unitary executive permits the president to fire any federal worker regardless of whether Congress has protected them or not. In a seminal opinion laying out the unitary executive theory, for example, Justice Antonin Scalia argued that the president must have the power to remove “principal officers” — high-ranking officials like Dellinger who must be nominated by the president and confirmed by the Senate. Under Scalia’s approach, lower-ranking government workers may still be given some protection.The Fourth Circuit cannot override the Supreme Court’s decision to embrace the unitary executive theory. But the Owen opinion essentially tries to police the line drawn by Scalia. The Supreme Court has given Trump the power to fire some high-ranking officials, but he shouldn’t be able to use that power as a back door to eliminate job protections for all civil servants.The Fourth Circuit suggests that the federal law which simultaneously gave the MSPB exclusive authority over civil service disputes, while also protecting MSPB members from being fired for political reasons, must be read as a package. Congress, this argument goes, would not have agreed to shunt all civil service disputes to the MSPB if it had known that the Supreme Court would strip the MSPB of its independence. And so, if the MSPB loses its independence, it must also lose its exclusive authority over civil service disputes — and federal courts must regain the power to hear those cases.It remains to be seen whether this argument persuades a Republican Supreme Court — all three of the Fourth Circuit judges who decided the Owen case are Democrats, and two are Biden appointees. But the Fourth Circuit’s reasoning closely resembles the kind of inquiry that courts frequently engage in when a federal law is struck down.When a court declares a provision of federal law unconstitutional, it often needs to ask whether other parts of the law should fall along with the unconstitutional provision, an inquiry known as “severability.” Often, this severability analysis asks which hypothetical law Congress would have enacted if it had known that the one provision is invalid.The Fourth Circuit’s decision in Owen is essentially a severability opinion. 
It takes as a given the Supreme Court’s conclusion that laws protecting Dellinger and the MSPB members from being fired are unconstitutional, then asks which law Congress would have enacted if it had known that it could not protect MSPB members from political reprisal. The Fourth Circuit’s conclusion is that, if Congress had known that MSPB members cannot be politically independent, then it would not have given them exclusive authority over civil service disputes.If the Supreme Court permits Trump to neutralize the MSPB, that would fundamentally change how the government functionsThe idea that civil servants should be hired based on merit and insulated from political pressure is hardly new. The first law protecting civil servants, the Pendleton Civil Service Reform Act, which President Chester A. Arthur signed into law in 1883.Laws like the Pendleton Act do more than protect civil servants who, say, resist pressure to deny government services to the president’s enemies. They also make it possible for top government officials to actually do their jobs.Before the Pendleton Act, federal jobs were typically awarded as patronage — so when a Democratic administration took office, the Republicans who occupied most federal jobs would be fired and replaced by Democrats. This was obviously quite disruptive, and it made it difficult for the government to hire highly specialized workers. Why would someone go to the trouble of earning an economics degree and becoming an expert on federal monetary policy, if they knew that their job in the Treasury Department would disappear the minute their party lost an election?Meanwhile, the task of filling all of these patronage jobs overwhelmed new presidents. As Candice Millard wrote in a 2011 biography of President James A. Garfield, the last president elected before the Pendleton Act, when Garfield took office, a line of job seekers began to form outside the White House “before he even sat down to breakfast.” By the time Garfield had eaten, this line “snaked down the front walk, out the gate, and onto Pennsylvania Avenue.” Garfield was assassinated by a disgruntled job seeker, a fact that likely helped build political support for the Pendleton Act.By neutralizing the MSPB, Trump is effectively undoing nearly 150 years worth of civil service reforms, and returning the federal government to a much more primitive state. At the very least, the Fourth Circuit’s decision in Owen is likely to force the Supreme Court to ask if it really wants a century and a half of work to unravel.See More: #federal #courts #novel #proposal #rein
  • Managers rethink ecological scenarios as threats rise amid climate change

    In Sequoia and Kings Canyon National Parks in California, trees that have persisted through rain and shine for thousands of years are now facing multiple threats triggered by a changing climate.

    Scientists and park managers once thought giant sequoia forests were nearly impervious to stressors like wildfire, drought and pests. Yet, even very large trees are proving vulnerable, particularly when those stressors are amplified by rising temperatures and increasing weather extremes.

    The rapid pace of climate change—combined with threats like the spread of invasive species and diseases—can affect ecosystems in ways that defy expectations based on past experiences. As a result, Western forests are transitioning to grasslands or shrublands after unprecedented wildfires. Woody plants are expanding into coastal wetlands. Coral reefs are being lost entirely.

    To protect these places, which are valued for their natural beauty and the benefits they provide for recreation, clean water and wildlife, forest and land managers increasingly must anticipate risks they have never seen before. And they must prepare for what those risks will mean for stewardship as ecosystems rapidly transform.

    As ecologists and a climate scientist, we’re helping them figure out how to do that.

    Managing changing ecosystems

    Traditional management approaches focus on maintaining or restoring how ecosystems looked and functioned historically.

    However, that doesn’t always work when ecosystems are subjected to new and rapidly shifting conditions.

    Ecosystems have many moving parts—plants, animals, fungi, and microbes; and the soil, air and water in which they live—that interact with one another in complex ways.

    When the climate changes, it’s like shifting the ground on which everything rests. The results can undermine the integrity of the system, leading to ecological changes that are hard to predict.

    To plan for an uncertain future, natural resource managers need to consider many different ways changes in climate and ecosystems could affect their landscapes. Essentially, what scenarios are possible?

    Preparing for multiple possibilities

    At Sequoia and Kings Canyon, park managers were aware that climate change posed some big risks to the iconic trees under their care. More than a decade ago, they undertook a major effort to explore different scenarios that could play out in the future.

    It’s a good thing they did, because some of the more extreme possibilities they imagined happened sooner than expected.

    In 2014, drought in California caused the giant sequoias’ foliage to die back, something never documented before. In 2017, sequoia trees began dying from insect damage. And, in 2020 and 2021, fires burned through sequoia groves, killing thousands of ancient trees.

    While these extreme events came as a surprise to many people, thinking through the possibilities ahead of time meant the park managers had already begun to take steps that proved beneficial. One example was prioritizing prescribed burns to remove undergrowth that could fuel hotter, more destructive fires.

    The key to effective planning is a thoughtful consideration of a suite of strategies that are likely to succeed in the face of many different changes in climates and ecosystems. That involves thinking through wide-ranging potential outcomes to see how different strategies might fare under each scenario—including preparing for catastrophic possibilities, even those considered unlikely.

    For example, prescribed burning may reduce risks from both catastrophic wildfire and drought by reducing the density of plant growth, whereas suppressing all fires could increase those risks in the long run.

    Strategies undertaken today have consequences for decades to come. Managers need to have confidence that they are making good investments when they put limited resources toward actions like forest thinning, invasive species control, buying seeds or replanting trees. Scenarios can help inform those investment choices.

    Constructing credible scenarios of ecological change to inform this type of planning requires considering the most important unknowns. Scenarios look not only at how the climate could change, but also how complex ecosystems could react and what surprises might lie beyond the horizon.

    Scientists at the North Central Climate Adaptation Science Center are collaborating with managers in the Nebraska Sandhills to develop scenarios of future ecological change under different climate conditions, disturbance events like fires and extreme droughts, and land uses like grazing.

    Key ingredients for crafting ecological scenarios

    To provide some guidance to people tasked with managing these landscapes, we brought together a group of experts in ecology, climate science, and natural resource management from across universities and government agencies.

    We identified three key ingredients for constructing credible ecological scenarios:

    1. Embracing ecological uncertainty: Instead of banking on one “most likely” outcome for ecosystems in a changing climate, managers can better prepare by mapping out multiple possibilities. In Nebraska’s Sandhills, we are exploring how this mostly intact native prairie could transform, with outcomes as divergent as woodlands and open dunes.

    2. Thinking in trajectories: It’s helpful to consider not just the outcomes, but also the potential pathways for getting there. Will ecological changes unfold gradually or all at once? By envisioning different pathways through which ecosystems might respond to climate change and other stressors, natural resource managers can identify critical moments where specific actions, such as removing tree seedlings encroaching into grasslands, can steer ecosystems toward a more desirable future.

    3. Preparing for surprises: Planning for rare disasters or sudden species collapses helps managers respond nimbly when the unexpected strikes, such as a severe drought leading to widespread erosion. Being prepared for abrupt changes and having contingency plans can mean the difference between quickly helping an ecosystem recover and losing it entirely.

    Over the past decade, access to climate model projections through easy-to-use websites has revolutionized resource managers’ ability to explore different scenarios of how the local climate might change.

    What managers are missing today is similar access to ecological model projections and tools that can help them anticipate possible changes in ecosystems. To bridge this gap, we believe the scientific community should prioritize developing ecological projections and decision-support tools that can empower managers to plan for ecological uncertainty with greater confidence and foresight.

    Ecological scenarios don’t eliminate uncertainty, but they can help to navigate it more effectively by identifying strategic actions to manage forests and other ecosystems.

    Kyra Clark-Wolf is a research scientist in ecological transformation at the University of Colorado Boulder.

    Brian W. Miller is a research ecologist at the U.S. Geological Survey.

    Imtiaz Rangwala is a research scientist in climate at the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • SpaceX Is Reportedly Giving Elon Musk Advance Warning of Drug Tests

    Image by Jim Watson / AFP via Getty / Futurism

    Generally speaking, drug testing in the workplace is supposed to be conducted at random intervals — but according to insider sources, that's not the case for the sometimes-world's richest man.

    A New York Times exposé about Elon Musk's fear and loathing on the campaign trail found that the billionaire not only has been on boatloads of risky and illegal drugs during his turn into hard-right politics, but was also being tipped off about when he'd be tested for them.

    As we've long known, SpaceX's federal contractor status requires that all its employees — including its mercurial CEO — pass drug tests. Given Musk's admitted penchant for mind-altering substances, and for ketamine in particular, his ability to pass those tests has long been a concern.

    If the NYT's sources are to be believed, we may now know how the 53-year-old keeps passing: because he's been warned in advance when the "random" tests are going to occur, and been able to plan accordingly. (Though those sources didn't get into it, anyone who's ever had to pass a drug test themselves knows that there are typically two options: drink so much water that you pee all the drugs out of your system, or get urine or hair from someone else and pass it off as your own.)

    As those same sources allege, Musk's substance use increased significantly as he helped propel Donald Trump to the White House for a second time. He purportedly told people that his bladder had been affected by his frequent ketamine use, and had been taking ecstasy and psilocybin mushrooms too.

    The multi-hyphenate businessman and politico also carried around a daily medication box with at least 20 pills in it — including ones with markings that resemble the ADHD drug Adderall, according to people who saw photos of it and relayed it back to the NYT. (He's also been linked to cocaine and a cornucopia of other substances.)

    When it comes to stimulants like Adderall and anything else in Musk's daily pill box — which, despite how the article makes it sound, is not that abnormal a thing for a man in his 50s to be carrying around — there's a good chance that the billionaire has prescriptions that could excuse at least some abuse. He also has claimed that he was prescribed ketamine for depression, though to be fair, taking so much that it makes it hard to pee would suggest he's far surpassed his recommended dosage.

    As Futurism has noted before, Musk's drugs of choice described here are not often screened for on standard drug panels. Though we don't know how in-depth federal drug tests are, standard tests primarily screen for cocaine, cannabis, amphetamines, opiates, and PCP, though some include ecstasy/MDMA as well. Testing for ketamine is, on the other hand, pretty rare.

    If Musk is being tipped off about his drug tests — and is either flushing his system or taking a sober underling's urine or hair — none of that would matter. But given that the worst of his purported substance abuse revolves around ketamine, there's always a chance that he's in a recurring K-hole and getting off scot-free, unlike his employees, who are held to a much higher standard.

    More on Musk's drug use: Ex-FBI Agent: Elon Musk's Drug Habit Made Him an Easy Target for Russian Spies
  • What AI’s impact on individuals means for the health workforce and industry

    Transcript    
    PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.”      
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.
    The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak.
    You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues.
    So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.  
    To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar.
    Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence.
    Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics.
    Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.
    Here is my interview with Ethan Mollick:
    LEE: Ethan, welcome.
    ETHAN MOLLICK: So happy to be here, thank you.
    LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. So to get started, how and why did it happen that you’ve become one of the leading experts on AI?
    MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was doing my PhD at MIT, I worked with Marvin Minsky and the MIT Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it.
    And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst into the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially under education and AI and performance, I became sort of a go-to person in the field.
    And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months’ head start for GPT-4, right. Like that’s all we have at this point, is a few months’ head start. So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.
    LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?
    MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that tells us about the state of AI right now.
    One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things.
    And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy and diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever.
    So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay. So we’re kind of all in the same boat here, which is a very unusual space for a new technology.
    LEE: And I, you know, explained that you’re at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty?
    MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and on education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is general-purpose technology; it’s going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect.
    So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated.
    LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.
    MOLLICK: Yeah. Those are great questions. So first of all, when I was at the media lab, that was pre-the current boom in sort of, you know, even in the old-school machine learning kind of space. So there was a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system.
    There was a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing?
    The fact that it was just like brute force over the corpus of all human knowledge turns out to be a little bit of like a, you know, it’s a miracle and a little bit of a disappointment in some ways compared to how elaborate some of this was. So, you know, I think that, that was sort of my first encounters in sort of the intellectual way.
    The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021 that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.
    LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.
    MOLLICK: Yes.
    LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?
    MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course, computers can give you answers to questions and write fun things. So there’s difference of moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right.
    I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?”
    So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4 level intelligence from, you know, when it was released has dropped 99.97% at this point, right.
    LEE: Yes. Mm-hmm.
    MOLLICK: I mean, I could run a GPT-4 class system basically on my phone. Microsoft’s releasing things that can almost run on like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either.
    LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time. You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?
    MOLLICK: I mean, you know, it’s not even dread as much as like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered.
    You know, as academics, we’re a little used to dead ends, right, and like, you know, some getting the lap. But the idea that entire fields are hitting that way. Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete.
What do you do to change that? You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, our research angles that matters, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one.
    Now we’ve built a system that can build teaching simulations on demand by you talking to it with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet.
    LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this. 
    MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where like, ughhhh, but then there’s joy and basically like also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills.
Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of the stress of our job comes from the fact that we suck at some of it. And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI. And I think it’s an ongoing question about how long that lasts. But for right now, like you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely.
    But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety.
    LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or request for prior authorization for some reimbursement to an insurance company.
    And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?
MOLLICK: So I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there are lots of performance gains to be had, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about systems that fit along with this, right.
    So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there’s incredible amounts of individual innovation in productivity and performance improvements in this field, like very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains.
And one of my big concerns is seeing that happen. We’re seeing the same kind of thing in nonmedical problems, which is, you know, we’ve got research showing 20 to 40% performance improvements, like not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result.
    LEE: You know, where are those productivity gains going, then, when you get to the organizational level?
    MOLLICK: Well, they’re dying for a few reasons. One is, there’s a tendency for individual contributors to underestimate the power of management, right.
Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, as far as we can tell, about 40% of the advantage that US firms have over firms in other countries has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal.
When things get stuck at the individual level, right, you can’t start bringing them up to the level where systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen.
    So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons.
    And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves.
    So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there’s lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change.
    LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?
    MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. They involve change. So I think part of this is embracing the idea that R&D has to happen in organizations again.
    What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field.
    So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab.
So the crowd is the idea of how to empower clinicians and administrators and support networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to get AI to work, not just in direct patient care, right. But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill?
    And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.
    LEE: So let’s shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones.
    And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ?
    MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it’s cheap, right? You could overrule it or whatever you want, but like not asking seems foolish.
I think the two places where there’s a burning almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space.
    But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say when should you use AI and when not. Are there blind spots? What are those things?
And I worry that, like, to me, that would be the crash project I’d be invoking because I’m doing the same thing in education, which is this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than, you know, the level of education people actually get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to.
    So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that.
    LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?
MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t like an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting in, you know, any one case is probably not that high.
    A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but like magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing.
    So I worry when people say teach AI skills. No one’s been able to articulate to me as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right.
I mean, there’s value in learning a little bit about how the models work. There’s a value in working with these systems. A lot of it’s just hands-on-keyboard kind of work. But, like, we don’t have an easy slam dunk “this is what you learn in the world of AI” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better at prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people hands-on-keyboard experience, getting them to … there’s like four things I could teach you about AI, and two of them are already starting to disappear.
    But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s like, that’s it as far as the research telling you what to do, and the rest is building intuition.
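For readers who want to see how those four tips might fit together in practice, here is a minimal, hypothetical sketch in Python. The `build_prompt` helper, the `send_to_model` stub, and the clinical wording are illustrative assumptions, not any particular product’s API or a recommended clinical workflow.

```python
# A sketch of a prompt that applies the four tips above:
# (1) be direct, (2) provide context, (3) give step-by-step directions,
# (4) show good and bad examples of the output you want.
# `send_to_model` is a hypothetical stand-in, not a specific vendor API.

def build_prompt(visit_notes: str) -> str:
    return "\n".join([
        # 1. Be direct: state exactly what you want.
        "Summarize the following clinic visit notes for the patient's chart.",
        # 2. Provide context: the role to act as and the material to work from.
        "Act as an experienced primary-care physician writing for other clinicians.",
        f"Visit notes:\n{visit_notes}",
        # 3. Step-by-step directions (per the discussion, increasingly optional).
        "Steps: state the chief complaint, key findings, assessment, then plan.",
        # 4. Good and bad examples of the kind of output you want.
        "Good example: 'Chief complaint: three days of productive cough ...'",
        "Bad example: 'The patient came in and we talked about some stuff.'",
    ])

def send_to_model(prompt: str) -> str:
    # Hypothetical placeholder; swap in whatever chat client you actually use.
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_prompt("Pt reports 3 days of cough, low-grade fever, no dyspnea.")
    print(prompt)  # Inspect the assembled prompt before sending it anywhere.
```

As the conversation notes, the third and fourth elements matter less as models improve, so a sketch like this is best read as scaffolding for building intuition rather than a fixed recipe.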
LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.”
MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize.
LEE: It’s good to chuckle about that, but actually, I can’t think of a better book. Like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading?
    MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems.
So, like, it’s, sort of, like my Twitter feed, my online newsletter. In some ways, I’m just trying to make people aware of what these systems can do by just showing a lot, right, rather than picking one thing and saying, like, this is a general-purpose technology, let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is.
But using it. Voice modes help a lot. In terms of readings, I mean, I think that there are a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview …
    LEE: Yeah, that’s a great one.
    MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I thinkKarpathyhas some really nice videos of use that I would recommend.
    Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But like all we can offer are hints in some ways. Like there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works.
    LEE: Yeah.
    MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right.
    LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here.
Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME accrediting body, what’s the one thing you would want them to really internalize?
MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, where you say, “Let’s see how this works.” Because the things that usually make medical technologies hard, which is, like, unclear results and limited, you know, expensive use cases, mean they roll out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine.
I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods, like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast.
    So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right.
We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other forms of AI, and your AI working group that is thinking about how to solve this problem is not the right group here.
    LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer.
    MOLLICK: Yes. Yes.
LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that they have that model of a machine in mind. And that’s partly, I think, psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall.
    But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence really just even in the title of the book tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea?
MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person even though it isn’t, right.
There are a lot of warnings and caveats to it, but if you start from a person, a smart person you’re talking to, your mental model will be more accurate than starting from a smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right.
LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens?
    MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who like may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people.
So, like, it’s hard to imagine that in five to 10 years medicine will be so upended that, even if AI were better than doctors at every single thing doctors do, we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine.
But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference. We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, the best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point.
    Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not.
    Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get?
    LEE: Yeah.
MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, the future is easier to predict, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help with everything.
Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way, or get much worse if we handle it badly. Diagnostic accuracy will increase, right.
    And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen and how does that affect us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it.
    LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining.
    MOLLICK: Thank you.  
    I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work.
    One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does.
    In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI.
The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI.
    Here’s now my interview with Azeem Azhar:
    LEE: Azeem, welcome.
    AZEEM AZHAR: Peter, thank you so much for having me. 
    LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before.
    And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day?
    AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip …
    LEE: Oh wow.
    AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started.
    And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large.
    LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through?
    AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed.
    Now, I’d been aware of GPT-3 and GPT-2, which I played around with and with BERT, the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language that really made me think we’ve crossed into a new domain. We’ve gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November, the 30th.
    And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely pass some kind of threshold.
    LEE: And who’s the we that you were experimenting with?
AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, or they walk into our virtual team room, and we try to solve problems.
    LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance, manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and which ways, you know, efficiencies are found.  
    And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and looking at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine?
AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. I mean, it’s hard to imagine a sector that is broader than that.
So I think we can start to break it down, and, you know, where we’re seeing things first with generative AI is at the, sort of, softest entry point, which is medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros I noticed, right? They’re on the tablet computers, and they’re scribing away.
    And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload.
    And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help.
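To make that rule of thumb concrete, here is a minimal, purely illustrative sketch of the kind of human-in-the-loop automation described above, written in Python. The `generate_draft` placeholder and the sample wording are assumptions for illustration, not any specific clinical tool or workflow.

```python
# A minimal human-in-the-loop sketch of the "automate what you repeat" idea:
# the repetitive draft is produced automatically, but nothing is kept or sent
# without explicit sign-off from the human reviewer.
# `generate_draft` is a hypothetical placeholder, not a real library call.

def generate_draft(task: str, details: str) -> str:
    # Placeholder; in practice this might call a model or fill a template.
    return f"DRAFT ({task}): {details}"

def run_with_approval(task: str, details: str) -> str | None:
    draft = generate_draft(task, details)
    print(draft)
    answer = input("Approve this draft? [y/N] ").strip().lower()
    # Only an explicit "y" counts as approval; anything else is discarded.
    return draft if answer == "y" else None

if __name__ == "__main__":
    approved = run_with_approval(
        "repeat-prescription note",
        "Renew inhaler prescription; flag for clinician review at next visit.",
    )
    print("Saved for review." if approved else "Discarded; handle manually.")
```

The design point is simply that the automation handles the repetition while the human stays in the loop as the final approver, which is the arrangement Azhar describes.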
    So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced.
    So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized.
    LEE: Yeah.
    AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.
    LEE: Yup.
    AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to.
    And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and patient pathway. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on.
    It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector.
And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout.
So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.
    LEE: I love how you break that down. And I want to press on a couple of things.
You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers have, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?
    AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example.
    In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different.
    I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say well no one ever does this, and I said, well you know the thing is that I kind of just want to get this thing to go away.
    LEE: Yeah.
    AZHAR: And I think that that’s why medicine is and healthcare is so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week.
    LEE: Right. Yeah.
    AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.
    LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution.
    Because like healthcare, as a consumer, I don’t have choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons.
    And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work?
    AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice.
    I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about you know asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors.
    I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner.
    LEE: Yeah.
    AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.
    LEE: Right.
AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few differences: the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.
    LEE: Yeah, one last question while we’re still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis.
And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity since obviously costs, in most of the developed world, are a huge, huge problem?
AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around the hospital faster and faster than ever before.
    We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right?
    LEE: Yeah, yeah.
AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed, by orders of magnitude, the productivity of the people who made cars, starting with Henry Ford, because he was able to reorganize his factories around the electrical delivery of power and therefore have the moving assembly line, which 10xed the productivity of that system.
    So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later …
    LEE: Right.
    AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for.
    And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system …
    LEE: Yup.
    AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that.
    So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies that might slow things down. But I think a reimagining is possible.
And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya was another. And the latter was a cardiac care unit where you couldn’t get enough heart surgeons.
    LEE: Yeah, yep.
    AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own.
LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, who is certified in some way, licensed to do it. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop?
    AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions like, What time should I leave for the airport to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold.
    If I come back to my example of prescribing Ventolin. It’s really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 years or 12 years of medical training. And why that couldn’t be prescribed by an algorithm or an AI system.
    LEE: Right. Yep. Yep.
    AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time.
    LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you.
    AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician.
In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’ve just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing. Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart.
I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI” is, it’s obviously been done in PowerPoint naturally, and it’s a bunch of arrows. Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that.
LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time.
AZHAR: Yeah, yeah. Thank god for Clippy. Yes.
    LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary on that.
    And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway.
    AZHAR: Right.
    LEE: And so now two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like?
    AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs, and there are the classic set of processes you go through.
    You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning so that every element of the system can learn from this experience.
So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly.
    So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what is it to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots.
    LEE: Yes.
    AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval.
I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s saying … hearing what I’m saying, but he’s actually pulling up the real data going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be.
LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. And because one of the things that consumers want to know always is, you know, what’s the truth?
    AZHAR: Right.
    LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow.
    AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week.
    And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have like a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician.
    LEE: Yeah.
AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right.
LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah.
    AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor traditionally given to diabetics but now lots of people will wear them—and sports people will wear them—you probably gathered a lot of extreme tail distribution data by reading the Reddit/biohackers …
    LEE: Yes.
    AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next.
    LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this.
    And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions?
    AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in.
LEE: OK.
AZHAR: As patients, we will have many, many more touch points and interaction with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches.
    And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time.
    LEE: Yes.
    AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety.
And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us anticipate problems before they arise, and again, I use my experience for things that I’ve tracked really, really well. And I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of a preemptive measure, so I think that that will become progressively more common, along with that sense that we will know our baselines.
I mean, when you think about being an athlete, which is something I think about, but I could never ever do, but what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive, but it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health.
    LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said.
Let me just thank you again for joining this conversation. I think it’s been really fascinating. And the systemic issues that you tend to see with such clarity are, I think, going to be the most, kind of, profound drivers of change in the future. So thank you so much.
    AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you.  
    I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies.
    In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.  
    Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear.
    Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even in radical ways. I think a big insight I got from these conversations is how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level.
    Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, travel agent, and more were also shown during the conference.
But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing.
A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in.
    Until next time.
    What AI’s impact on individuals means for the health workforce and industry
Transcript
PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.”
This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.
Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?
In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.
The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak. You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues. So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.
To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar.
Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence.
Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics.
Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.
Here is my interview with Ethan Mollick:
LEE: Ethan, welcome.
ETHAN MOLLICK: So happy to be here, thank you.
LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. So to get started, how and why did it happen that you’ve become one of the leading experts on AI?
MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was doing my PhD at MIT, I worked with Marvin Minsky and the MIT Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it. And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially around education and AI and performance, I became sort of a go-to person in the field. And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months’ head start for GPT-4, right. Like that’s all we have at this point, is a few months’ head start. So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.
LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?
MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that tell us about the state of AI right now. One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things. And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy in diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever. So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay. So we’re kind of all in the same boat here, which is a very unusual space for a new technology.
LEE: And I, you know, explained that you’re at Wharton.
Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty?
MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation-entrepreneurship. I’ve launched startups before, and working on that and education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is general-purpose technology; it’s going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect. So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated.
LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.
MOLLICK: Yeah. Those are great questions. So first of all, when I was at the media lab, that was pre-the current boom in sort of, you know, even in the old-school machine learning kind of space. So there were a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system. There were a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much the Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing? The fact that it was just like brute force over the corpus of all human knowledge turns out to be a little bit of like a, you know, it’s a miracle and a little bit of a disappointment in some ways compared to how elaborate some of this was. So, you know, I think that, that was sort of my first encounters in sort of the intellectual way. The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level.
That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021, that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.
LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.
MOLLICK: Yes.
LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?
MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course, computers can give you answers to questions and write fun things. So there’s a difference of moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right. I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?” So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4 level intelligence from, you know, when it was released has dropped 99.97% at this point, right.
LEE: Yes. Mm-hmm.
MOLLICK: I mean, I could run a GPT-4 class system basically on my phone. Microsoft’s releasing things that can almost run on like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either.
LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time. You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that.
And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?
MOLLICK: I mean, you know, it’s not even dread as much as like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered. You know, as academics, we’re a little used to dead ends, right, and like, you know, some getting the lap. But the idea that entire fields are hitting that way. Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete. What do you do to change that? You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one. Now we’ve built a system that can build teaching simulations on demand by you talking to it with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet.
LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this.
MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where like, ughhhh, but then there’s joy and basically like also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills. Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of our stress of our job comes from the fact that we suck at some of it. And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI.
And I think it’s an ongoing question about how long that lasts. But for right now, like you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely. But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety.
LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or request for prior authorization for some reimbursement to an insurance company. And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?
MOLLICK: So I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there’s lots of performance gains to be gained, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about systems that fit along with this, right. So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there’s incredible amounts of individual innovation in productivity and performance improvements in this field, like very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains. And one of my big concerns is seeing that happen. We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 and 40% performance improvements, like not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result.
LEE: You know, where are those productivity gains going, then, when you get to the organizational level?
MOLLICK: Well, they’re dying for a few reasons. One is, there’s a tendency for individual contributors to underestimate the power of management, right. Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the advantage that US firms have over firms in other countries has to do with management ability. Like, management is a big deal. Organizing is a big deal.
Thinking about how you coordinate is a big deal. At the individual level, when things get stuck there, right, you can’t start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen. So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons. And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves. So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there’s lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change.
LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?
MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. They involve change. So I think part of this is embracing the idea that R&D has to happen in organizations again. What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field. So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab. So the crowd is the idea of how to empower clinicians and administrators and supporter networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to get AI to work, not just in direct patient care, right. But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill? And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.
LEE: So let’s shift a little bit to the patient.
You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones. And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ?
MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it’s cheap, right? You could overrule it or whatever you want, but like not asking seems foolish. I think the two places where there’s a burning almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space. But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say when should you use AI and when not. Are there blind spots? What are those things? And I worry that, like, to me, that would be the crash project I’d be invoking because I’m doing the same thing in education, which is this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than not getting, you know, to the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to. So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that.
LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?
MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t like an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting, you know, any one case is probably not that useful. A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but like magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing. So I worry when people say teach AI skills.
No one’s been able to articulate to me as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right. I mean, there’s value in learning a little bit how the models work. There’s a value in working with these systems. A lot of it’s just hands on keyboard kind of work. But, like, we don’t have an easy slam dunk “this is what you learn in the world of AI” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboards, getting them to … there’s like four things I could teach you about AI, and two of them are already starting to disappear. But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s like, that’s it as far as the research telling you what to do, and the rest is building intuition.
LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.”
MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize.
LEE: It’s good to chuckle about that, but actually, I can’t think of a better book, like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading?
MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems. So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is. But using it. Voice modes help a lot. In terms of readings, I mean, I think that there is a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview …
LEE: Yeah, that’s a great one.
MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think Karpathy has some really nice videos of use that I would recommend. Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But like all we can offer are hints in some ways. Like there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works.
LEE: Yeah.
MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right.
LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here. Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME accrediting body, what’s the one thing you would want them to really internalize?
MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, which, “Let’s see how this works.” Because it’s, like, the things that make medical technologies hard to do, which is like unclear results, limited, you know, expensive use cases where it rolls out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine. I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods, like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast. So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right. We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other form of AI, and your AI working group that is thinking about how to solve this problem is not the right people here.
LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer.
MOLLICK: Yes. Yes.
LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall. But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration.
And I thought your use of the term Co-Intelligence really just even in the title of the book tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea?
MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person even though it isn’t, right. There’s a lot of warnings and caveats to it, but if you start from person, smart person you’re talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right.
LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens?
MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who like may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people. So, like, it’s hard to imagine that in five to 10 years medicine being so upended that even if AI was better than doctors at every single thing doctors do, that we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine. But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference. We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point. Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not. Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get?
LEE: Yeah.
MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes it easier to assume the future, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help for everything. Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way or much worse if we handle it badly. Diagnostic accuracy will increase, right. And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen and how does that affect us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it.
LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining.
MOLLICK: Thank you.
I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work.
One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does. In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI.
The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI.
Here’s now my interview with Azeem Azhar:
LEE: Azeem, welcome.
AZEEM AZHAR: Peter, thank you so much for having me.
LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before. And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day?
AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip …
LEE: Oh wow.
AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started. And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large.
LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through?
AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed. Now, I’d been aware of GPT-3 and GPT-2, which I played around with and with BERT, the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language that really made me think we’ve crossed into a new domain. We’ve gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November, the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely passed some kind of threshold.
LEE: And who’s the we that you were experimenting with?
AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, or they walk into our virtual team room, and we try to solve problems.
LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities.
Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance, manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and in which ways, you know, efficiencies are found. And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and looking at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine?
AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. I mean, it’s hard to imagine a sector that is more broad than that. So I think we can start to break it down, and, you know, where we’re seeing things with generative AI will be the, sort of, softest entry point, which is the medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros I noticed, right? They’re on the tablet computers, and they’re scribing away. And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload. And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help. So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced. So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized.
LEE: Yeah.
AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.
LEE: Yup.
AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to. And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and patient pathway.
And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on. It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector. And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout. So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.
LEE: I love how you break that down. And I want to press on a couple of things. You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?
AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example. In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different. I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say, well, no one ever does this, and I said, well, you know, the thing is that I kind of just want to get this thing to go away.
LEE: Yeah.
AZHAR: And I think that that’s why medicine and healthcare are so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week.
LEE: Right. Yeah.
AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.
LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution. Because like healthcare, as a consumer, I don’t have a choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons. And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work?
AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice. I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors. I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner.
LEE: Yeah.
AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.
LEE: Right.
AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few little things, like the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.
LEE: Yeah, one last question while we’re still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis.
And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity since obviously costs, in most of the developed world, are a huge, huge problem? AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around the hospital faster and faster than ever before. We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right? LEE: Yeah, yeah. AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed the productivity by orders of magnitude of people who made cars starting with Henry Ford because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system. So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later … LEE: Right. AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for. And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system … LEE: Yup. AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that. So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies that might slow things down. But I think a reimagining is possible. And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya was another. And the latter was a cardiac care unit where you couldn’t get enough heart surgeons. LEE: Yeah, yep. AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel.
So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own. LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumer might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, that is certified in some way, licensed to do. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop? AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions like, What time should I leave for the airport to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold. If I come back to my example of prescribing Ventolin. It’s really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 years or 12 years of medical training. And why that couldn’t be prescribed by an algorithm or an AI system. LEE: Right. Yep. Yep. AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time. LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you. AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician. In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’ve just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing. Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. 
It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart. I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI,” has obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that. LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time. AZHAR: Yeah, yeah. Thank god for Clippy. Yes. LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary on that. And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway. AZHAR: Right. LEE: And so now two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like? AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs, and there are the classic set of processes you go through. You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning so that every element of the system can learn from this experience. So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly. So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what is it to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots. LEE: Yes. AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this.
So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval. I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s saying … hearing what I’m saying, but he’s actually pulling up the real data going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be. LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. And because one of the things that consumers always want to know is, you know, what’s the truth? AZHAR: Right. LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow. AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week. And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have like a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician. LEE: Yeah. AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right. LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah. AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor, traditionally given to diabetics but now lots of people will wear them—and sports people will wear them—you probably gathered a lot of extreme tail distribution data by reading the Reddit r/biohackers … LEE: Yes. AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next. LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this. And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions? AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in. LEE: OK. AZHAR: As patients, we will have many, many more touch points and interactions with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches. And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time. LEE: Yes.
AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety. And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us personalize problems before they arise, and again, I use my experience for things that I’ve tracked really, really well. And I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of a preemptive measure, so I think that that will become progressively more common and that sense that we will know our baselines. I mean, when you think about being an athlete, which is something I think about, but I could never ever do, but what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive, but it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health. LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said. Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think somehow the systemic issues, the systemic issues that you tend to just see with such clarity, I think are going to be the most, kind of, profound drivers of change in the future. So thank you so much. AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you.   I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies. In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.   Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care.
Azeem’s relentless optimism about our AI future was also so heartening to hear. Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even in radical ways. I think a big insight I got from these conversations is how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level. Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, a travel agent, and more were also shown during the conference. But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was that in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing. A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in. Until next time.
    What AI’s impact on individuals means for the health workforce and industry
    Transcript [MUSIC]    [BOOK PASSAGE]  PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.” [END OF BOOK PASSAGE]    [THEME MUSIC]    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.      [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak. You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues. So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.   To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar. Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence. Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics. Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society. 
[TRANSITION MUSIC] Here is my interview with Ethan Mollick: LEE: Ethan, welcome. ETHAN MOLLICK: So happy to be here, thank you. LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. [LAUGHTER] So to get started, how and why did it happen that you’ve become one of the leading experts on AI? MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was [getting] my PhD at MIT, I worked with Marvin Minsky and the MIT [Massachusetts Institute of Technology] Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it. And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst into the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially under education and AI and performance, I became sort of a go-to person in the field. And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months’ head start for GPT-4, right. Like that’s all we have at this point, is a few months’ head start. [LAUGHTER] So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question. LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been? MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that tell us about the state of AI right now. One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things. And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy in diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever. So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay.
So we’re kind of all in the same boat here, which is a very unusual space for a new technology. LEE: And I, you know, explained that you’re at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty? MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is general-purpose technology; it’s going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect. So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated. LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI. MOLLICK: Yeah. Those are great questions. So first of all, when I was at the media lab, that was pre-the current boom in sort of, you know, even in the old-school machine learning kind of space. So there was a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system. There was a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing? The fact that it was just like brute force over the corpus of all human knowledge turns out to be a little bit of like a, you know, it’s a miracle and a little bit of a disappointment in some ways [LAUGHTER] compared to how elaborate some of this was. So, you know, I think that, that was sort of my first encounters in sort of the intellectual way. The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon.
And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021 that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind. LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention. MOLLICK: Yes. LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point? MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course, computers can give you answers to questions and write fun things. So there’s a difference in moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right. I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?” So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4 level intelligence from, you know, when it was released has dropped 99.97% at this point, right. LEE: Yes. Mm-hmm. MOLLICK: I mean, I could run a GPT-4 class system basically on my phone. Microsoft’s releasing things that can almost run on like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either. LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time.
You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever? MOLLICK: I mean, you know, it’s not even dread as much as like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. [LAUGHTER] And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered. You know, as academics, we’re a little used to dead ends, right, and like, you know, some getting the lap. But the idea that entire fields are hitting that way. Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete. What do you do to change that? You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one. Now we’ve built a system that can build teaching simulations on demand by you talking to it with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet. LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this.  MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where like, ughhhh, but then there’s joy and basically like also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills. Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of our stress of our job comes from the fact that we suck at some of it. 
And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI. And I think it’s an ongoing question about how long that lasts. But for right now, like you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely. But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety. LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or request for prior authorization for some reimbursement to an insurance company. And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs? MOLLICK: So I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there’s lots of performance gains to be gained, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about systems that fit along with this, right. So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there’s incredible amounts of individual innovation in productivity and performance improvements in this field, like very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains. And one of my big concerns is seeing that happen. We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 and 40% performance improvements, like not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result. LEE: You know, where are those productivity gains going, then, when you get to the organizational level? MOLLICK: Well, they’re dying for a few reasons. 
One is, there’s a tendency for individual contributors to underestimate the power of management, right. Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the US advantage over other companies, of US firms, has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal. At the individual level, when things get stuck there, right, you can’t start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen. So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons. And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves. So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there’s lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change. LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI? MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. They involve change. So I think part of this is embracing the idea that R&D has to happen in organizations again. What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field. So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab. So the crowd is the idea of how to empower clinicians and administrators and supporter networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to [get] AI to work, not just in direct patient care, right. 
But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill? And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves. LEE: So let’s shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones. And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ? MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it’s cheap, right? You could overrule it or whatever you want, but like not asking seems foolish. I think the two places where there’s a burning almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space. But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say when should you use AI and when not. Are there blind spots? What are those things? And I worry that, like, to me, that would be the crash project I’d be invoking because I’m doing the same thing in education, which is this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than not getting an, you know, to the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to. So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that. LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching? MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t like an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting, you know, any one case is probably not that useful.
A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but like magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing. So I worry when people say teach AI skills. No one’s been able to articulate to me as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right. I mean, there’s value in learning a little bit how the models work. There’s a value in working with these systems. A lot of it’s just hands on keyboard kind of work. But, like, we don’t have an easy slam dunk “this is what you learn in the world of AI” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboards, getting them to … there’s like four things I could teach you about AI, and two of them are already starting to disappear. But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s like, that’s it as far as the research telling you what to do, and the rest is building intuition. LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.” [LAUGHS] MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize. [LAUGHTER] LEE: It’s good to chuckle about that, but actually, I can’t think of a better book, like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading? MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems. So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is. But using it. Voice modes help a lot. In terms of readings, I mean, I think that there is a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview … LEE: Yeah, that’s a great one. MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think [Andrej] Karpathy has some really nice videos of use that I would recommend.
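[EDITOR’S NOTE] For readers who want a concrete picture of the four principles Mollick lists above (be direct, provide context, give step-by-step directions, show good and bad examples), here is a minimal, illustrative sketch of how they might be assembled into a single prompt. The build_prompt helper and the clinical wording below are hypothetical examples for this note only; they are not from the conversation or from any product mentioned in it.

```python
# Illustrative sketch of the four prompting principles Mollick describes:
# (1) be direct, (2) provide context, (3) give step-by-step directions,
# (4) show good and bad examples. The helper and the sample content are
# hypothetical, added for illustration.

def build_prompt(task, context, steps, good_example, bad_example):
    """Assemble one prompt string from the four elements."""
    numbered = "\n".join(f"{i + 1}. {step}" for i, step in enumerate(steps))
    return "\n\n".join([
        f"Task: {task}",                   # 1. be direct about what you want
        f"Context: {context}",             # 2. supply the background you have
        f"Steps:\n{numbered}",             # 3. step-by-step directions
        f"Good example:\n{good_example}",  # 4a. the kind of output you want
        f"Bad example:\n{bad_example}",    # 4b. the kind you do not want
    ])

if __name__ == "__main__":
    prompt = build_prompt(
        task="Summarize this clinic visit note for the patient in plain language.",
        context="You are writing on behalf of a primary care physician to a patient with asthma.",
        steps=["List the key findings", "Explain the next steps", "Keep it under 150 words"],
        good_example="Your breathing tests were stable. Keep using your daily inhaler as before.",
        bad_example="Spirometry FEV1/FVC within normal limits; continue ICS/LABA regimen.",
    )
    print(prompt)
```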
Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But like all we can offer are hints in some ways. Like there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works. LEE: Yeah. MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right. LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here. Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME [Liaison Committee on Medical Education] accrediting body, what’s the one thing you would want them to really internalize? MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, which [is], “Let’s see how this works.” Because it’s, like, the things that make medical technologies hard to do, which is like unclear results, limited, you know, expensive use cases where it rolls out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine. I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for long time—decision support, algorithmic methods, like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast. So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right. We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universal applicable today, which makes it urgent, but also as something that is not the same as your other form of AI, and your AI working group that is thinking about how to solve this problem is not the right people here. LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer. MOLLICK: Yes. Yes. LEE: Write a query, get results. 
And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall. But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence really just even in the title of the book tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea? MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” [LAUGHTER] Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person even though it isn’t, right. There’s a lot of warnings and caveats to it, but if you start from person, smart person you’re talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right. LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens? MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who like may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people. So, like, it’s hard to imagine that in five to 10 years medicine being so upended that even if AI was better than doctors at every single thing doctors do, that we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine. But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference.
We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, the best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point. Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not. Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get? LEE: Yeah. MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes it easier to assume the future, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help for everything. Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way or get much worse if we handle it badly. Diagnostic accuracy will increase, right. And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen and how does that affect us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it. LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining. MOLLICK: Thank you. [TRANSITION MUSIC]   I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work. One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does. In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time.
But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI. The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI. Here’s now my interview with Azeem Azhar: LEE: Azeem, welcome. AZEEM AZHAR: Peter, thank you so much for having me.  LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before. And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day? AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip … LEE: Oh wow. AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started. And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large. LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through? AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed. Now, I’d been aware of GPT-3 and GPT-2, which I played around with and with BERT, the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language that really made me think we’ve crossed into a new domain. We’ve gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely passed some kind of threshold. LEE: And who’s the we that you were experimenting with? AZHAR: So I have a team of four who support me. They’re mostly researchers of different types.
I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, [LAUGHTER] or they walk into our virtual team room, and we try to solve problems. LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance, manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and in which ways, you know, efficiencies are found.   And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and looking at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine? AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. I mean, it’s hard to imagine a sector that is [LAUGHS] more broad than that. So I think we can start to break it down, and, you know, where we’re seeing things with generative AI will be at the, sort of, softest entry point, which is the medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros I noticed, right? [LAUGHTER] They’re on the tablet computers, and they’re scribing away. And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload. And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help. So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced. So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized. LEE: Yeah. AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura. LEE: Yup.
AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to. And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and patient pathway. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on. It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector. And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit …  I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. [LAUGHTER] But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout. So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems. LEE: I love how you break that down. And I want to press on a couple of things. You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated? AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example. In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different. I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say, well, no one ever does this, and I said, well, you know, the thing is that I kind of just want to get this thing to go away. LEE: Yeah.
AZHAR: And I think that that’s why medicine and healthcare are so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week. LEE: Right. Yeah. AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer. LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution. Because like healthcare, as a consumer, I don’t have a choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons. And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work? AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice. I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors. I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner. LEE: Yeah. AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly. LEE: Right. AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few little things: the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful. LEE: Yeah, one last question while we’re still on economics.
There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis. And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity since obviously costs are, in most of the developed world, a huge, huge problem? AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around [LAUGHTER] the hospital faster and faster than ever before. We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right? LEE: Yeah, yeah. AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed the productivity by orders of magnitude of people who made cars starting with Henry Ford because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system. So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later … LEE: Right. AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for. And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system … LEE: Yup. AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that. So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies that might slow things down. But I think a reimagining is possible. And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya [now known as Narayana Health] was another.
And the latter was a cardiac care unit where you couldn’t get enough heart surgeons. LEE: Yeah, yep. AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own. LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, that is certified in some way, licensed to do. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop? AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions like, What time should I leave for the airport to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold. If I come back to my example of prescribing Ventolin, it’s really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 years or 12 years of medical training. And why that couldn’t be prescribed by an algorithm or an AI system. LEE: Right. Yep. Yep. AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time. LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you. AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician. In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’ve just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing.
Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart. I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI” is, it’s obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. [LAUGHS] Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that. LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time. [LAUGHS] AZHAR: Yeah, yeah. Thank god for Clippy. Yes. LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself [LAUGHS], about regulation. And we made some minor commentary on that. And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway. AZHAR: Right. LEE: And so now two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like? AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs [randomized controlled trials], and there are the classic set of processes you go through. You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning so that every element of the system can learn from this experience. So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly. So you go from one approval a year to a hundred approvals a year to a thousand approvals a year.
So we will then actually, I suspect, need to think about what is it to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots [very rapidly]. LEE: Yes. AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval. I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s saying … hearing what I’m saying, but he’s actually pulling up the real data going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be. [LAUGHTER] LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. And because one of the things that consumers want to know always is, you know, what’s the truth? AZHAR: Right. LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow. AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week. And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have like a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician. LEE: Yeah. AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on Creatine, Reddit may yet prove to have been right. [LAUGHTER] LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah. AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor traditionally given to diabetics but now lots of people will wear them—and sports people will wear them—you probably gathered a lot of extreme tail distribution data by reading the Reddit/biohackers … LEE: Yes. AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM [continuous glucose monitor]. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next. LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this. And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions? AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in. LEE: OK. 
[LAUGHS] AZHAR: As patients, we will have many, many more touch points and interaction with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches. And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time. LEE: Yes. AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety. And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us personalize problems before they arise, and again, I use my experience for things that I’ve tracked really, really well. And I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of preemptive measure, so I think that that will become progressively more common and that sense that we will know our baselines. I mean, when you think about being an athlete, which is something I think about, but I could never ever do, [LAUGHTER] but what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive, but it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health. LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said. Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think somehow the systemic issues, the systemic issues that you tend to just see with such clarity, I think are going to be the most, kind of, profound drivers of change in the future. So thank you so much. AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you. [TRANSITION MUSIC]   I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies.
In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.   Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear. Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even in radical ways. I think a big insight I got from these conversations is that how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level. Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, a travel agent, and more were also shown during the conference. But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was that in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing. [THEME MUSIC] A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in. Until next time. [MUSIC FADES]
  • Peloton's Guided Walk Workouts Are Great, Even If You Don't Own a Treadmill

    I never considered myself a walking girl. I never engaged in the "hot girl walk" trends on social media or went on "mental health walks" during the pandemic lockdown. In fact, I long thought walking—the milestone most of us reach as babies, the activity the majority of us do each day to accomplish all the other basic tasks of living—had a little too much PR hype, especially after learning that the much-ballyhooed "10,000 steps" we're supposed to take every day relied on an arbitrary, made-up figure for marketing pedometers. If I am going to do cardio, I reasoned, I'm going to do cardio: cycling, running, swimming, or playing sports with my friends. If I'm not sweating, what's the use? After trying out Peloton's guided walks, available in the at-home fitness giant's incredibly versatile app, I have learned the use. I am now, finally, a walking girl.
    Is walking good cardio?
    The reductive view I formerly held of cardio—that I have to be sweaty and tired for it to matter—is and was always false, which I knew, intellectually. As Lifehacker senior health editor Beth Skwarecki has explained before, walking is cardio—and it's actually a pretty good form of it, too. How fast you walk can even be used to measure your health and capacity. Different intensities of cardio do different things for your body, but at its most basic level, walking still burns calories. It's also a great, easy way to work a little extra movement into your life, especially if you're a fitness beginner or have an injury. The catalyst for me checking out Peloton's walking offerings was actually my mom being "prescribed" walking as a treatment for an issue she's been having with her back. The issue prevented her from walking long periods of time or walking fast, so after addressing it medically with doctors and physical therapists, her at-home assignment was to walk longer and longer durations on a walking pad in the living room. As an able-bodied person living in a walkable city, I have definitely taken the ability to walk for granted. I decided to check out Peloton's walking workouts to see if they'd be useful for my mom—but they ended up being useful for me.
    What are Peloton's walking workouts all about?
    To find walking workouts on the Peloton app, select Walking from the top of your home screen or type "walking" into the search bar. Peloton's walking workouts are designed for use on their Tread treadmills—but I've found that I enjoy them just as much if I go outside, although I obviously can't control the incline if I do that. The guided walks available in the app are like any class Peloton offers: They come in a variety of lengths and formats, are led by a certified instructor who encourages you and reminds you of safety cues, and feature playlists of music that keep the energy going. I start off nearly every weekday morning by walking to Dunkin' Donuts and then to the post office to drop off whatever I've sold on resale apps, so I queue up a Peloton walk for my journey. While I don't necessarily need to have an instructor in my ears reminding me to, well, walk, it encourages me to keep my pace up; I just ignore whatever they're saying about messing with incline and resistance buttons, as I'm not on a treadmill. This morning, I walked along with a five-minute warmup walk routine from instructor Logan Aldridge, who shared encouraging reminders that walking, even if it feels easy, is "massively worth it" for a person's health.
He also gave speed cues using practical, real-world examples instead of just relying on cues built around treadmill functions. At one point, he described the pace goal as "not Manhattan walking, not New York City walking," which was funny because I was, in fact, Manhattan-walking my way to a Dunkin', so I slowed down a bit. You can enable location sharing for more accurate measurements and, of course, I have my Apple Watch paired with my Peloton app to give me better data on my heart rate, output, and speed, too. I forgot to enable my location tracking at the beginning of the walk, so at the end, it prompted me to enter in my distance walked for better measurements. I glanced at my watch, which told me how far I'd walked, entered in that number, and was taken to a screen where I could review my output. You can absolutely do this on a treadmill, too, and the workouts are more or less designed for you to. There are live classes available, which enter the on-demand archive when they're finished, and you can choose from cool-down walks, power walks, hikes, walks set to certain kinds of playlists, or even "walk & talk" walks that have two instructors if you like that chatty, podcast kind of feel. Some classes feature walking and running and their titles tell you that upfront. As you're scrolling the options, you'll mostly see title cards with instructors on Treads in the Peloton studio, but you'll also see a few where the instructors are outside. These guided walks are designed more for outdoor walks and the instructors will call out the half-way point so you always know when to turn around and head home. The workouts come in all kinds of lengths, from five minutes up to 75, with the longer ones often incorporating both walking and running.
    Why I like Peloton's walking workouts
    These workouts are an easy way to slot some extra intentional movement into my day. I'm already walking around a lot, but I'm not always doing it with purpose. Having an instructor reminding me to connect with my steps and a playlist designed to keep me on a certain pace turns a standard coffee run into a mindful exercise. Walking is also low-impact and accessible, so even on a day you're tired or even if other forms of cardio are beyond your reach, this opens up a whole world of fitness opportunities. Perhaps most importantly, this is the most accessible kind of workout on the app because you really don't need anything special. You don't need a floor mat, yoga blocks, or weights, let alone a fancy treadmill. As long as you have some good shoes, you can walk around all you want while still getting the company's signature encouragement and guidance from trained pros.
  • New Ontario bills gut environmental protections, eliminate green building bylaws

    The Legislative Assembly of Ontario, from www.ola.org
     
    Two recent bills introduced in the Ontario Legislature are poised to gut environmental protections and severely curb the authority of municipal planners. Here’s a summary of the tabled Bills 5 and 17, focused on areas of relevance to architects.
    Bill 5: Repealing the Endangered Species Act, introducing regulation-free Special Economic Zones
    The omnibus Bill 5, Protect Ontario by Unleashing our Economy Act, 2025, is ostensibly aimed at stimulating the economy by removing barriers to development.
    One of its key components is replacing the province’s Endangered Species Act with a hollowed-out Species Conservation Act. The new act allows the government to pick and choose which species are protected, and narrowly defines their “habitat” as the nest or den of an animal—not the broader feeding grounds, forests, or wetlands they need to survive.
    Developers must currently apply for a permit when their projects threaten a species or habitat, and these applications are reviewed by environmental experts. This process would be replaced by an online registration form; when the form is submitted, a company is free to start building, including damaging or destroying habitats of listed species, so long as the activity is registered. The new Species Conservation Act will completely exclude migratory birds and certain aquatic species.
    “It’s a developer’s dream and an environmental nightmare,” writes environmental law organization Ecojustice.
    Bill 5 also contains provisions for creating Special Economic Zones, where provincial and municipal laws do not apply—a status that the Province could claim for any project or proponent. This would allow work on these projects to be exempt from zoning regulations and approvals, as well as from labour laws, health and safety laws, traffic and speeding laws, and even laws preventing trespassing on private property, notes advocacy group Environmental Defence.
    The Bill specifically exempts the Ontario Place redevelopment from the Environmental Bill of Rights. As a result, explains lawyers from Dentons, “the public will not receive notice of, or have opportunity to, comment on proposals, decisions, or events that could affect the environment as it relates to the Ontario Place Redevelopment Project.”
    Advocacy group Ontario Place For All writes: “The introduction of this clause is a clear response to the overwhelming number of comments—over 2200—from the community to the Environmental Registry of Ontario regarding the Ford government’s application to cut an existing combined sewer overflow (CSO) that will be in the way of Therme’s planned beach. The application has the CSO emptying into the west channel inside the breakwater and potentially allowing raw sewage into an area used recreationally by rowers, paddlers, swimmers, and for water shows by the CNE. The Auditor General’s Report estimated the cost of moving the CSO to be approximately $60 million.”
    The Bill also amends the Ontario Heritage Act, allowing the Province to exempt properties from archaeological and heritage conservation requirements if they could potentially advance provincial priorities including, but not limited to, transit, housing, health, long-term care, or infrastructure.
    Another part of the bill would damage the clean energy transition, notes Environmental Defence. “Bill 5 would enable the government to ban all parts of energy projects that come from abroad, especially China. China makes the majority of solar panels (over 80 per cent), wind turbines (around 60 per cent) and control systems in the world,” it writes. “If enacted, Bill 5 would likely end solar power installation in Ontario and deprive Ontarians access to the cleanest source of new electricity available.”
    In the Legislature, Liberal member Ted Hsu noted, “They called this bill, Bill 5, the Protect Ontario by Unleashing our Economy Act. However, upon studying the bill, I think a more appropriate short title would be ‘don’t protect Ontario and use tariffs as cover to unleash lobbying act.’ That is a summary of what I think is wrong in principle with Bill 5.”
    Bill 5 has undergone its second reading and will be the subject of a Standing Committee hearing.

    Bill 17: Striking down green development standards, paring down planning applications
    Bill 17: Protecting Ontario by Building Faster and Smarter Act, 2025 aims to dismantle the City of Toronto’s Green Building Bylaw, along with limiting municipal authority in planning processes. These changes are proposed in the ostensible interest of speeding up construction in order to lower housing costs.
    The bill states that municipalities must follow the Building Code, and prohibits them from passing by-laws or imposing construction standards that exceed those set out in the Building Code. This seems to deliver a major win to development group RESCON, which has been lobbying to strike down the Toronto Green Standard.
    Fifteen municipalities in the Greater Toronto Area currently have green development standards. Non-profit group The Atmospheric Fund (TAF) notes that green standards do not slow housing construction. “In 2023, Toronto exceeded its housing targets by 51%, with nearly 96% of housing starts being subject to the Toronto Green Standard. Overall, Toronto’s housing starts have grown or stayed consistent nearly every year since the TGS was implemented.” The group also notes that the Ontario Building Code’s energy efficiency requirements have not been updated since 2017, and that Ontario’s cities will not meet their climate targets without more progressive pathways to low-carbon construction.
    Also of direct impact to architects is the proposed standardization of requirements for “complete” planning applications. Under the tabled bill, the Minister of Municipal Affairs and Housing will have the power to govern what information or material is required (or prohibited) in connection with official plan amendments, zoning by-law amendments, site plan approval, draft plans of subdivisions, and consent applications. This would prevail over existing Official Plan requirements. Currently, the Ontario government is proposing that sun/shadow, wind, urban design and lighting studies would not be required as part of a complete planning application.
    The bill would also deem an application to be complete not when it is accepted by a municipal planning authority, but solely on the basis of it being prepared by a prescribed professional. The prescribed professions are not yet defined, but the government has cited engineers as an example.
    Bill 17 proposes to grant minor variances “as of right” so long as they fall within a certain percentage of current setback regulations. (They are currently proposing 10%.) This would apply to urban residential lands outside of the Greenbelt.
    The Bill proposes amendments to the Development Charges Act that will change what municipalities can charge, including eliminating development charges for long-term care homes. The bill limits Inclusionary Zoning to apply to a maximum 5% set-aside rate, and a maximum 25-year period of affordability.
    Dentons notes that: “While not specifically provided for in Bill 17, the Technical Briefing suggests that, the Minister of Infrastructure will have authority to approve MZOs, an authority currently held only by the Minister of Municipal Affairs and Housing.”
    Environmental Defence’s Phil Pothen writes: “Some of the measures proposed in Bill 17—like deferring development charges—could help build smarter and faster if they were applied selectively to infill, mid-rise and multiplex housing. But the bill’s current language would apply these changes to sprawl and McMansion development as well.”
    He adds: “Bill 17 also includes provisions that seem aimed at erasing municipal urban rules and green building standards, imposing generic road-design standards on urban and suburban streets and preventing urban design. Those changes could actually make it harder to speed up housing—reversing progress toward more efficient construction and land use and the modes of transportation that support them.”
    The Bill would also amend the Building Code to eliminate the need for a secondary provincial approval of innovative construction products if they have already been examined by the Canadian Construction Materials Centre of the National Research Council of Canada.
    The Ontario government is currently seeking comment on their proposed regulation to standardize complete application requirements. They are also currently seeking comment on the proposed regulation that provides for as-of-rights within 10% of current required setbacks. These comment periods are open until June 26, 2025.

    The post New Ontario bills gut environmental protections, eliminate green building bylaws appeared first on Canadian Architect.
  • Essex Police discloses ‘incoherent’ facial recognition assessment

    Essex Police has not properly considered the potentially discriminatory impacts of its live facial recognition (LFR) use, according to documents obtained by Big Brother Watch and shared with Computer Weekly.
    While the force claims in an equality impact assessment (EIA) that “Essex Police has carefully considered issues regarding bias and algorithmic injustice”, privacy campaign group Big Brother Watch said the document – obtained under Freedom of Information (FoI) rules – shows it has likely failed to fulfil its public sector equality duty (PSED) to consider how its policies and practices could be discriminatory.
    The campaigners highlighted how the force is relying on false comparisons to other algorithms and “parroting misleading claims” from the supplier about the LFR system’s lack of bias.
    For example, Essex Police said that when deploying LFR, it will set the system threshold “at 0.6 or above, as this is the level whereby equitability of the rate of false positive identification across all demographics is achieved”.
    However, this figure is based on the National Physical Laboratory’s (NPL) testing of NEC’s Neoface V4 LFR algorithm deployed by the Metropolitan Police and South Wales Police, which Essex Police does not use.
    Instead, Essex Police has opted to use an algorithm developed by Israeli biometrics firm Corsight, whose chief privacy officer, Tony Porter, was formerly the UK’s surveillance camera commissioner until January 2021.
    Highlighting testing of the Corsight_003 algorithm conducted in June 2022 by the US National Institute of Standards and Technology (NIST), the EIA also claims it has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing, according to the supplier”.
    However, looking at the NIST website, where all of the testing data is publicly shared, there is no information to support the figure cited by Corsight, or its claim to essentially have the least biased algorithm available.
    A separate FoI response to Big Brother Watch confirmed that, as of 16 January 2025, Essex Police had not conducted any “formal or detailed” testing of the system itself, or otherwise commissioned a third party to do so.


    “Looking at Essex Police’s EIA, we are concerned about the force’s compliance with its duties under equality law, as the reliance on shaky evidence seriously undermines the force’s claims about how the public will be protected against algorithmic bias,” said Jake Hurfurt, head of research and investigations at Big Brother Watch.
    “Essex Police’s lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk. This slapdash scrutiny of their intrusive facial recognition system sets a worrying precedent.
    “Facial recognition is notorious for misidentifying women and people of colour, and Essex Police’s willingness to deploy the technology without testing it themselves raises serious questions about the force’s compliance with equalities law. Essex Police should immediately stop their use of facial recognition surveillance.”
    The need for UK police forces deploying facial recognition to consider how their use of the technology could be discriminatory was highlighted by a legal challenge brought against South Wales Police by Cardiff resident Ed Bridges.
    In August 2020, the UK Court of Appeal ruled that the use of LFR by the force was unlawful because the privacy violations it entailed were “not in accordance” with legally permissible restrictions on Bridges’ Article 8 privacy rights; it did not conduct an appropriate data protection impact assessment; and it did not comply with its PSED to consider how its policies and practices could be discriminatory.
    The judgment specifically found that the PSED is a “duty of process and not outcome”, and requires public bodies to take reasonable steps “to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex”.
    Big Brother Watch said equality assessments must rely on “sufficient quality evidence” to back up the claims being made and ultimately satisfy the PSED, but that the documents obtained do not demonstrate the force has had “due regard” for equalities.
    Academic Karen Yeung, an interdisciplinary professor at Birmingham Law School and School of Computer Science, told Computer Weekly that, in her view, the EIA is “clearly inadequate”.
    She also criticised the document for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying exclusively on testing of entirely different software algorithms used by other police forces trained on different populations: “This does not, in my view, fulfil the requirements of the public sector equality duty. It is a document produced from a cut-and-paste exercise from the largely irrelevant material produced by others.”

    Computer Weekly contacted Essex Police about every aspect of the story.
    “We take our responsibility to meet our public sector equality duty very seriously, and there is a contractual requirement on our LFR partner to ensure sufficient testing has taken place to ensure the software meets the specification and performance outlined in the tender process,” said a spokesperson.
    “There have been more than 50 deployments of our LFR vans, scanning 1.7 million faces, which have led to more than 200 positive alerts, and nearly 70 arrests.
    “To date, there has been one false positive, which, when reviewed, was established to be as a result of a low-quality photo uploaded onto the watchlist and not the result of bias issues with the technology. This did not lead to an arrest or any other unlawful action because of the procedures in place to verify all alerts. This issue has been resolved to ensure it does not occur again.”
    The spokesperson added that the force is also committed to carrying out further assessment of the software and algorithms, with the evaluation of deployments and results being subject to an independent academic review.
    “As part of this, we have carried out, and continue to do so, testing and evaluation activity in conjunction with the University of Cambridge. The NPL have recently agreed to carry out further independent testing, which will take place over the summer. The company have also achieved an ISO 42001 certification,” said the spokesperson. “We are also liaising with other technical specialists regarding further testing and evaluation activity.”
    However, the force did not comment on why it was relying on the testing of a completely different algorithm in its EIA, or why it had not conducted or otherwise commissioned its own testing before operationally deploying the technology in the field.
    Computer Weekly followed up Essex Police for clarification on when the testing with Cambridge began, as this is not mentioned in the EIA, but received no response by time of publication.

    Although Essex Police and Corsight claim the facial recognition algorithm in use has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing”, there is no publicly available data on NIST’s website to support this claim.
    Drilling down into the demographic split of false positive rates shows, for example, roughly 100 times more false positives for West African women than for Eastern European men.
    While this is an improvement on the previous two algorithms submitted for testing by Corsight, other publicly available data held by NIST undermines Essex Police’s claim in the EIA that the “algorithm is identified by NIST as having the lowest bias variance between demographics”.
    Another metric held by NIST – FMR Max/Min, the ratio between the demographic groups that give the most and the fewest false positives – essentially represents how inequitable the error rates are across different age groups, sexes and ethnicities.
    In this instance, smaller values represent better performance, with the ratio being an estimate of how many times more false positives can be expected in one group over another.
    According to the NIST webpage for “demographic effects” in facial recognition algorithms, the Corsight algorithm has an FMR Max/Min of 113, meaning there are at least 21 algorithms that display less bias. For comparison, the least biased algorithm according to NIST results belongs to a firm called Idemia, which has an FMR Max/Min of 5.
    However, like Corsight, the highest false match rate for Idemia’s algorithm was for older West African women. Computer Weekly understands this is a common problem with many of the facial recognition algorithms NIST tests because this group is not typically well-represented in the underlying training data of most firms.
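    For readers unfamiliar with the metric, FMR Max/Min is simply the highest per-demographic false match rate divided by the lowest. The short Python sketch below illustrates the arithmetic; the group names and rates are hypothetical, chosen only so the ratio lands near the 113 figure cited above, and do not reproduce NIST's published results for any vendor.

    # Illustrative sketch: deriving an FMR Max/Min ratio from per-demographic
    # false match rates. All figures below are hypothetical, not NIST data.
    demographic_fmr = {
        "eastern_european_men": 0.000002,
        "west_african_women": 0.000226,  # roughly 100x the lowest group, per the pattern described above
        "east_asian_men": 0.000015,
    }

    fmr_max = max(demographic_fmr.values())
    fmr_min = min(demographic_fmr.values())

    # A ratio of 1 would mean equal false match rates across groups; larger
    # values mean the algorithm is less equitable across demographics.
    print(f"FMR Max/Min: {fmr_max / fmr_min:.0f}")  # -> 113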
    Computer Weekly also confirmed with NIST that the FMR metric cited by Corsight relates to one-to-one verification, rather than the one-to-many situation police forces would be using it in.
    This is a key distinction, because if 1,000 people are enrolled in a facial recognition system that was built on one-to-one verification, then the false positive rate will be 1,000 times larger than the metrics held by NIST for FMR testing.
    “If a developer implements 1:N search as N 1:1 comparisons, then the likelihood of a false positive from a search is expected to be proportional to the false match for the 1:1 comparison algorithm,” said NIST scientist Patrick Grother. “Some developers do not implement 1:N search that way.”
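    To make the practical consequence of that distinction concrete, here is a minimal Python sketch, under the stated assumption that a 1:N search behaves like N independent 1:1 comparisons (which, as noted above, not every developer's implementation does). The numbers are illustrative only and are not drawn from Essex Police's or NIST's own testing.

    # Sketch: how a 1:1 false match rate (FMR) compounds across a watchlist
    # when a 1:N search is treated as N independent 1:1 comparisons (assumption).
    def false_alert_probability(fmr_1to1: float, watchlist_size: int) -> float:
        """Probability that one probe face falsely matches at least one enrolled identity."""
        return 1.0 - (1.0 - fmr_1to1) ** watchlist_size

    fmr = 0.0006  # the 1:1 figure cited in the EIA, used here purely for illustration
    for n in (1, 100, 1_000):
        print(f"watchlist of {n:>5}: per-probe false alert probability = {false_alert_probability(fmr, n):.4f}")

    # While n * fmr stays well below 1, the probability is roughly n * fmr; a
    # watchlist of 1,000 therefore multiplies the quoted 1:1 rate by roughly
    # 1,000, as the article describes.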
    Commenting on the contrast between this testing methodology and the practical scenarios the tech will be deployed in, Birmingham Law School’s Yeung said one-to-one is for use in stable environments to provide admission to spaces with limited access, such as airport passport gates, where only one person’s biometric data is scrutinised at a time.
    “One-to-many is entirely different – it’s an entirely different process, an entirely different technical challenge, and therefore cannot typically achieve equivalent levels of accuracy,” she said.
    Computer Weekly contacted Corsight about every aspect of the story related to its algorithmic testing, including where the “0.0006” figure is drawn from and its various claims to have the “least biased” algorithm.
    “The facts presented in your article are partial, manipulated and misleading,” said a company spokesperson. “Corsight AI’s algorithms have been tested by numerous entities, including NIST, and have been proven to be the least biased in the industry in terms of gender and ethnicity. This is a major factor for our commercial and government clients.”
    However, Corsight was either unable or unwilling to specify which facts are “partial, manipulated or misleading” in response to Computer Weekly’s request for clarification.
    Computer Weekly also contacted Corsight about whether it has done any further testing by running N one-to-one comparisons, and whether it has changed the system’s threshold settings for detecting a match to suppress the false positive rate, but received no response on these points.
    While most facial recognition developers submit their algorithms to NIST for testing on an annual or bi-annual basis, Corsight last submitted an algorithm in mid-2022. Computer Weekly contacted Corsight about why this was the case, given that most algorithms in NIST testing show continuous improvement with each submission, but again received no response on this point.

    The Essex Police EIA also highlights testing of the Corsight algorithm conducted in 2022 by the Department of Homeland Security (DHS), claiming it demonstrated “Corsight’s capability to perform equally across all demographics”.
    However, Big Brother Watch’s Hurfurt highlighted that the DHS study focused on bias in the context of true positives, and did not assess the algorithm for inequality in false positives.
    This is a key distinction for the testing of LFR systems, as false negatives where the system fails to recognise someone will likely not lead to incorrect stops or other adverse effects, whereas a false positive where the system confuses two people could have more severe consequences for an individual.
    The DHS itself also publicly came out against Corsight’s representation of the test results, after the firm claimed in subsequent marketing materials that “no matter how you look at it, Corsight is ranked #1. #1 in overall recognition, #1 in dark skin, #1 in Asian, #1 in female”.
    Speaking with IVPM in August 2023, DHS said: “We do not know what this claim, being ‘#1’ is referring to.” The department added that the rules of the testing required companies to get their claims cleared through DHS to ensure they do not misrepresent their performance.
    In its breakdown of the test results, IVPM noted that systems of multiple other manufacturers achieved similar results to Corsight. The company did not respond to a request for comment about the DHS testing.
    Computer Weekly contacted Essex Police about all the issues raised around Corsight testing, but received no direct response to these points from the force.

    While Essex Police claimed in its EIA that it “also sought advice from their own independent Data and Digital Ethics Committee in relation to their use of LFR generally”, meeting minutes obtained via FoI rules show that key impacts had not been considered.
    For example, when one panel member questioned how LFR deployments could affect community events or protests, and how the force could avoid the technology having a “chilling presence”, the officer present said “that’s a pretty good point, actually”, adding that he had “made a note” to consider this going forward.
    The EIA itself also makes no mention of community events or protests, and does not specify how different groups could be affected by these different deployment scenarios.
    Elsewhere in the EIA, Essex Police claims that the system is likely to have minimal impact across age, gender and race, citing the 0.6 threshold setting, as well as NIST and DHS testing, as ways of achieving “equitability” across different demographics. Again, this threshold setting relates to a completely different system used by the Met and South Wales Police.
    For each protected characteristic, the EIA has a section on “mitigating” actions that can be taken to reduce adverse impacts.
    While the “ethnicity” section again highlights the National Physical Laboratory’s testing of a completely different algorithm, most other sections note that “any watchlist created will be done so as close to the deployment as possible, therefore hoping to ensure the most accurate and up-to-date images of persons being added are uploaded”.
    However, Yeung noted that the EIA makes no mention of the specific watchlist creation criteria beyond high-level “categories of images” that can be included, and the claimed equality impacts of that process.
    For example, it does not consider how people from certain ethnic minority or religious backgrounds could be disproportionally impacted as a result of their over-representation in police databases, or the issue of unlawful custody image retention whereby the Home Office is continuing to hold millions of custody images illegally in the Police National Database.
    While the ethics panel meeting minutes offer greater insight into how Essex Police is approaching watchlist creation, the custody image retention issue was also not mentioned.
    Responding to Computer Weekly’s questions about the meeting minutes and the lack of scrutiny of key issues related to UK police LFR deployments, an Essex Police spokesperson said: “Our policies and processes around the use of live facial recognition have been carefully scrutinised through a thorough ethics panel.”

    Instead, the officer present explained how watchlists and deployments are decided based on the “intelligence case”, which then has to be justified as both proportionate and necessary.
    On the “Southend intelligence case”, the officer said deploying in the town centre would be permissible because “that’s where the most footfall is, the most opportunity to locate outstanding suspects”.
    They added: “The watchlist has to be justified by the key elements, the policing purpose. Everything has to be proportionate and strictly necessary to be able to deploy… If the commander in Southend said, ‘I want to put everyone that’s wanted for shoplifting across Essex on the watchlist for Southend’, the answer would be no, because is it necessary? Probably not. Is it proportionate? I don’t think it is. Would it be proportionate to have individuals who are outstanding for shoplifting from the Southend area? Yes, because it’s local.”
    However, the officer also said that, on most occasions, the systems would be deployed to catch “our most serious offenders”, as this would be easier to justify from a public perception point of view. They added that, during the summer, it would be easier to justify deployments because of the seasonal population increase in Southend.
    “We know that there is a general increase in violence during those months. So, we don’t need to go down to the weeds to specifically look at grievous bodily harm (GBH) or murder or rape, because they’re not necessarily fuelled by a spike in terms of seasonality, for example,” they said.
    “However, we know that because the general population increases significantly, the level of violence increases significantly, which would justify that I could put those serious crimes on that watchlist.”
    Commenting on the responses given to the ethics panel, Yeung said they “failed entirely to provide me with confidence that their proposed deployments will have the required legal safeguards in place”.
    According to the Court of Appeal judgment against South Wales Police in the Bridges case, the force’s facial recognition policy contained “fundamental deficiencies” in relation to the “who” and “where” question of LFR.
    “In relation to both of those questions, too much discretion is currently left to individual police officers,” it said. “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR can be deployed.”
    Yeung added: “The same applies to these responses of Essex Police force, failing to adequately answer the ‘who’ and ‘where’ questions concerning their proposed facial recognition deployments.
    “Worse still, the court stated that a police force’s local policies can only satisfy the requirements that the privacy interventions arising from use of LFR are ‘prescribed by law’ if they are published. The documents were obtained by Big Brother Watch through freedom of information requests, strongly suggesting that even these basic legal safeguards are not being met.”
    Yeung added that South Wales Police’s use of the technology was found to be unlawful in the Bridges case because there was excessive discretion left in the hands of individual police officers, allowing undue opportunities for arbitrary decision-making and abuses of power.


    “Every decision – where you will deploy, whose face is placed on the watchlist and why, and the duration of deployment – must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity,” she said.
    “I don’t see any of that happening. There are simply vague claims that ‘we’ll make sure we apply the legal test’, but how? They just offer unsubstantiated promises that ‘we will abide by the law’ without specifying how they will do so by meeting specific legal requirements.”
    Yeung further added that these documents indicate the police force is not looking for specific people wanted for serious crimes, but is setting up dragnets for a wide variety of ‘wanted’ individuals, including those wanted for non-serious crimes such as shoplifting.
    “There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said.
    “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law. That means that the police must be able to justify and defend the reasons why every single person whose face is uploaded to the facial recognition watchlist meets the legal test, based on their specific operational purpose.”
    Yeung concluded that, assuming they can do this, police must also consider the equality impacts of their actions, and how different groups are likely to be affected by their practical deployments: “I don’t see any of that.”
    In response to the concerns raised around watchlist creation, proportionality and necessity, an Essex Police spokesperson said: “The watchlists for each deployment are created to identify specific people wanted for specific crimes and to enforce orders. To date, we have focused on the types of offences which cause the most harm to our communities, including our hardworking businesses.
    “This includes violent crime, drugs, sexual offences and thefts from shops. As a result of our deployments, we have arrested people wanted in connection with attempted murder investigations, high-risk domestic abuse cases, GBH, sexual assault, drug supply and aggravated burglary offences. We have also been able to progress investigations and move closer to securing justice for victims.”

    Read more about police data and technology

    Metropolitan Police to deploy permanent facial recognition tech in Croydon: The Met is set to deploy permanent live facial recognition cameras on street furniture in Croydon from summer 2025, but local councillors say the decision – which has taken place with no community input – will further contribute to the over-policing of Black communities.
    UK MoJ crime prediction algorithms raise serious concerns: The Ministry of Justice is using one algorithm to predict people’s risk of reoffending and another to predict who will commit murder, but critics say the profiling in these systems raises ‘serious concerns’ over racism, classism and data inaccuracies.
    UK law enforcement data adequacy at risk: The UK government says reforms to police data protection rules will help to simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy.
“The facts presented in your article are partial, manipulated and misleading,” said a company spokesperson. “Corsight AI’s algorithms have been tested by numerous entities, including NIST, and have been proven to be the least biased in the industry in terms of gender and ethnicity. This is a major factor for our commercial and government clients.” However, Corsight was either unable or unwilling to specify which facts are “partial, manipulated or misleading” in response to Computer Weekly’s request for clarification. Computer Weekly also contacted Corsight about whether it has done any further testing by running N one-to-one comparisons, and whether it has changed the system’s threshold settings for detecting a match to suppress the false positive rate, but received no response on these points. While most facial recognition developers submit their algorithms to NIST for testing on an annual or bi-annual basis, Corsight last submitted an algorithm in mid-2022. Computer Weekly contacted Corsight about why this was the case, given that most algorithms in NIST testing show continuous improvement with each submission, but again received no response on this point. The Essex Police EIA also highlights testing of the Corsight algorithm conducted in 2022 by the Department of Homeland Security, claiming it demonstrated “Corsight’s capability to perform equally across all demographics”. However, Big Brother Watch’s Hurfurt highlighted that the DHS study focused on bias in the context of true positives, and did not assess the algorithm for inequality in false positives. This is a key distinction for the testing of LFR systems, as false negatives where the system fails to recognise someone will likely not lead to incorrect stops or other adverse effects, whereas a false positive where the system confuses two people could have more severe consequences for an individual. The DHS itself also publicly came out against Corsight’s representation of the test results, after the firm claimed in subsequent marketing materials that “no matter how you look at it, Corsight is ranked #1. #1 in overall recognition, #1 in dark skin, #1 in Asian, #1 in female”. Speaking with IVPM in August 2023, DHS said: “We do not know what this claim, being ‘#1’ is referring to.” The department added that the rules of the testing required companies to get their claims cleared through DHS to ensure they do not misrepresent their performance. In its breakdown of the test results, IVPM noted that systems of multiple other manufacturers achieved similar results to Corsight. The company did not respond to a request for comment about the DHS testing. Computer Weekly contacted Essex Police about all the issues raised around Corsight testing, but received no direct response to these points from the force. While Essex Police claimed in its EIA that it “also sought advice from their own independent Data and Digital Ethics Committee in relation to their use of LFR generally”, meeting minutes obtained via FoI rules show that key impacts had not been considered. For example, when one panel member questioned how LFR deployments could affect community events or protests, and how the force could avoid the technology having a “chilling presence”, the officer presentsaid “that’s a pretty good point, actually”, adding that he had “made a note” to consider this going forward. The EIA itself also makes no mention of community events or protests, and does not specify how different groups could be affected by these different deployment scenarios. 
Elsewhere in the EIA, Essex Police claims that the system is likely to have minimal impact across age, gender and race, citing the 0.6 threshold setting, as well as NIST and DHS testing, as ways of achieving “equitability” across different demographics. Again, this threshold setting relates to a completely different system used by the Met and South Wales Police. For each protected characteristic, the EIA has a section on “mitigating” actions that can be taken to reduce adverse impacts. While the “ethnicity” section again highlights the National Physical Laboratory’s testing of a completely different algorithm, most other sections note that “any watchlist created will be done so as close to the deployment as possible, therefore hoping to ensure the most accurate and up-to-date images of persons being added are uploaded”. However, Yeung noted that the EIA makes no mention of the specific watchlist creation criteria beyond high-level “categories of images” that can be included, and the claimed equality impacts of that process. For example, it does not consider how people from certain ethnic minority or religious backgrounds could be disproportionally impacted as a result of their over-representation in police databases, or the issue of unlawful custody image retention whereby the Home Office is continuing to hold millions of custody images illegally in the Police National Database. While the ethics panel meeting minutes offer greater insight into how Essex Police is approaching watchlist creation, the custody image retention issue was also not mentioned. Responding to Computer Weekly’s questions about the meeting minutes and the lack of scrutiny of key issues related to UK police LFR deployments, an Essex Police spokesperson said: “Our polices and processes around the use of live facial recognition have been carefully scrutinised through a thorough ethics panel.” Instead, the officer present explained how watchlists and deployments are decided based on the “intelligence case”, which then has to be justified as both proportionate and necessary. On the “Southend intelligence case”, the officer said deploying in the town centre would be permissible because “that’s where the most footfall is, the most opportunity to locate outstanding suspects”. They added: “The watchlisthas to be justified by the key elements, the policing purpose. Everything has to be proportionate and strictly necessary to be able to deploy… If the commander in Southend said, ‘I want to put everyone that’s wanted for shoplifting across Essex on the watchlist for Southend’, the answer would be no, because is it necessary? Probably not. Is it proportionate? I don’t think it is. Would it be proportionate to have individuals who are outstanding for shoplifting from the Southend area? Yes, because it’s local.” However, the officer also said that, on most occasions, the systems would be deployed to catch “our most serious offenders”, as this would be easier to justify from a public perception point of view. They added that, during the summer, it would be easier to justify deployments because of the seasonal population increase in Southend. “We know that there is a general increase in violence during those months. So, we don’t need to go down to the weeds to specifically look at grievous bodily harmor murder or rape, because they’re not necessarily fuelled by a spike in terms of seasonality, for example,” they said. 
“However, we know that because the general population increases significantly, the level of violence increases significantly, which would justify that I could put those serious crimes on that watchlist.” Commenting on the responses given to the ethics panel, Yeung said they “failed entirely to provide me with confidence that their proposed deployments will have the required legal safeguards in place”. According to the Court of Appeal judgment against South Wales Police in the Bridges case, the force’s facial recognition policy contained “fundamental deficiencies” in relation to the “who” and “where” question of LFR. “In relation to both of those questions, too much discretion is currently left to individual police officers,” it said. “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFRcan be deployed.” Yeung added: “The same applies to these responses of Essex Police force, failing to adequately answer the ‘who’ and ‘where’ questions concerning their proposed facial recognition deployments. “Worse still, the court stated that a police force’s local policies can only satisfy the requirements that the privacy interventions arising from use of LFR are ‘prescribed by law’ if they are published. The documents were obtained by Big Brother Watch through freedom of information requests, strongly suggesting that these even these basic legal safeguards are not being met.” Yeung added that South Wales Police’s use of the technology was found to be unlawful in the Bridges case because there was excessive discretion left in the hands of individual police officers, allowing undue opportunities for arbitrary decision-making and abuses of power. Every decision ... must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity. I don’t see any of that happening Karen Yeung, Birmingham Law School “Every decision – where you will deploy, whose face is placed on the watchlist and why, and the duration of deployment – must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity,” she said. “I don’t see any of that happening. There are simply vague claims that ‘we’ll make sure we apply the legal test’, but how? They just offer unsubstantiated promises that ‘we will abide by the law’ without specifying how they will do so by meeting specific legal requirements.” Yeung further added these documents indicate that the police force is not looking for specific people wanted for serious crimes, but setting up dragnets for a wide variety of ‘wanted’ individuals, including those wanted for non-serious crimes such as shoplifting. “There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said. “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law. 
That means that the police must be able to justify and defend the reasons why every single person whose face is uploaded to the facial recognition watchlist meets the legal test, based on their specific operational purpose.” Yeung concluded that, assuming they can do this, police must also consider the equality impacts of their actions, and how different groups are likely to be affected by their practical deployments: “I don’t see any of that.” In response to the concerns raised around watchlist creation, proportionality and necessity, an Essex Police spokesperson said: “The watchlists for each deployment are created to identify specific people wanted for specific crimes and to enforce orders. To date, we have focused on the types of offences which cause the most harm to our communities, including our hardworking businesses. “This includes violent crime, drugs, sexual offences and thefts from shops. As a result of our deployments, we have arrested people wanted in connection with attempted murder investigations, high-risk domestic abuse cases, GBH, sexual assault, drug supply and aggravated burglary offences. We have also been able to progress investigations and move closer to securing justice for victims.” about police data and technology Metropolitan Police to deploy permanent facial recognition tech in Croydon: The Met is set to deploy permanent live facial recognition cameras on street furniture in Croydon from summer 2025, but local councillors say the decision – which has taken place with no community input – will further contribute the over-policing of Black communities. UK MoJ crime prediction algorithms raise serious concerns: The Ministry of Justice is using one algorithm to predict people’s risk of reoffending and another to predict who will commit murder, but critics say the profiling in these systems raises ‘serious concerns’ over racism, classism and data inaccuracies. UK law enforcement data adequacy at risk: The UK government says reforms to police data protection rules will help to simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy. #essex #police #discloses #incoherent #facial
    WWW.COMPUTERWEEKLY.COM
    Essex Police discloses ‘incoherent’ facial recognition assessment
    Essex Police has not properly considered the potentially discriminatory impacts of its live facial recognition (LFR) use, according to documents obtained by Big Brother Watch and shared with Computer Weekly. While the force claims in an equality impact assessment (EIA) that “Essex Police has carefully considered issues regarding bias and algorithmic injustice”, privacy campaign group Big Brother Watch said the document – obtained under Freedom of Information (FoI) rules – shows it has likely failed to fulfil its public sector equality duty (PSED) to consider how its policies and practices could be discriminatory. The campaigners highlighted how the force is relying on false comparisons to other algorithms and “parroting misleading claims” from the supplier about the LFR system’s lack of bias. For example, Essex Police said that when deploying LFR, it will set the system threshold “at 0.6 or above, as this is the level whereby equitability of the rate of false positive identification across all demographics is achieved”. However, this figure is based on the National Physical Laboratory’s (NPL) testing of NEC’s Neoface V4 LFR algorithm deployed by the Metropolitan Police and South Wales Police, which Essex Police does not use. Instead, Essex Police has opted to use an algorithm developed by Israeli biometrics firm Corsight, whose chief privacy officer, Tony Porter, was formerly the UK’s surveillance camera commissioner until January 2021. Highlighting testing of the Corsight_003 algorithm conducted in June 2022 by the US National Institute of Standards and Technology (NIST), the EIA also claims it has “a bias differential FMR [False Match Rate] of 0.0006 overall, the lowest of any tested within NIST at the time of writing, according to the supplier”. However, looking at the NIST website, where all of the testing data is publicly shared, there is no information to support the figure cited by Corsight, or its claim to essentially have the least biased algorithm available. A separate FoI response to Big Brother Watch confirmed that, as of 16 January 2025, Essex Police had not conducted any “formal or detailed” testing of the system itself, or otherwise commissioned a third party to do so. “Looking at Essex Police’s EIA, we are concerned about the force’s compliance with its duties under equality law, as the reliance on shaky evidence seriously undermines the force’s claims about how the public will be protected against algorithmic bias,” said Jake Hurfurt, head of research and investigations at Big Brother Watch. “Essex Police’s lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk. This slapdash scrutiny of their intrusive facial recognition system sets a worrying precedent. “Facial recognition is notorious for misidentifying women and people of colour, and Essex Police’s willingness to deploy the technology without testing it themselves raises serious questions about the force’s compliance with equalities law. Essex Police should immediately stop their use of facial recognition surveillance.” The need for UK police forces deploying facial recognition to consider how their use of the technology could be discriminatory was highlighted by a legal challenge brought against South Wales Police by Cardiff resident Ed Bridges.
In August 2020, the UK Court of Appeal ruled that the use of LFR by the force was unlawful because the privacy violations it entailed were “not in accordance” with legally permissible restrictions on Bridges’ Article 8 privacy rights; it did not conduct an appropriate data protection impact assessment (DPIA); and it did not comply with its PSED to consider how its policies and practices could be discriminatory. The judgment specifically found that the PSED is a “duty of process and not outcome”, and requires public bodies to take reasonable steps “to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex”. Big Brother Watch said equality assessments must rely on “sufficient quality evidence” to back up the claims being made and ultimately satisfy the PSED, but that the documents obtained do not demonstrate the force has had “due regard” for equalities. Academic Karen Yeung, an interdisciplinary professor at Birmingham Law School and School of Computer Science, told Computer Weekly that, in her view, the EIA is “clearly inadequate”. She also criticised the document for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying exclusively on testing of entirely different software algorithms used by other police forces trained on different populations: “This does not, in my view, fulfil the requirements of the public sector equality duty. It is a document produced from a cut-and-paste exercise from the largely irrelevant material produced by others.” Computer Weekly contacted Essex Police about every aspect of the story. “We take our responsibility to meet our public sector equality duty very seriously, and there is a contractual requirement on our LFR partner to ensure sufficient testing has taken place to ensure the software meets the specification and performance outlined in the tender process,” said a spokesperson. “There have been more than 50 deployments of our LFR vans, scanning 1.7 million faces, which have led to more than 200 positive alerts, and nearly 70 arrests. “To date, there has been one false positive, which, when reviewed, was established to be as a result of a low-quality photo uploaded onto the watchlist and not the result of bias issues with the technology. This did not lead to an arrest or any other unlawful action because of the procedures in place to verify all alerts. This issue has been resolved to ensure it does not occur again.” The spokesperson added that the force is also committed to carrying out further assessment of the software and algorithms, with the evaluation of deployments and results being subject to an independent academic review. “As part of this, we have carried out, and continue to do so, testing and evaluation activity in conjunction with the University of Cambridge. The NPL have recently agreed to carry out further independent testing, which will take place over the summer. The company have also achieved an ISO 42001 certification,” said the spokesperson. “We are also liaising with other technical specialists regarding further testing and evaluation activity.” However, the force did not comment on why it was relying on the testing of a completely different algorithm in its EIA, or why it had not conducted or otherwise commissioned its own testing before operationally deploying the technology in the field. 
Computer Weekly followed up Essex Police for clarification on when the testing with Cambridge began, as this is not mentioned in the EIA, but received no response by time of publication. Although Essex Police and Corsight claim the facial recognition algorithm in use has “a bias differential FMR of 0.0006 overall, the lowest of any tested within NIST at the time of writing”, there is no publicly available data on NIST’s website to support this claim. Drilling down into the demographic split of false positive rates shows, for example, that there is a factor of 100 more false positives in West African women than for Eastern European men. While this is an improvement on the previous two algorithms submitted for testing by Corsight, other publicly available data held by NIST undermines Essex Police’s claim in the EIA that the “algorithm is identified by NIST as having the lowest bias variance between demographics”. Looking at another metric held by NIST – FMR Max/Min, which refers to the ratio between demographic groups that give the most and least false positives – it essentially represents how inequitable the error rates are across different age groups, sexes and ethnicities. In this instance, smaller values represent better performance, with the ratio being an estimate of how many times more false positives can be expected in one group over another. According to the NIST webpage for “demographic effects” in facial recognition algorithms, the Corsight algorithm has an FMR Max/Min of 113(22), meaning there are at least 21 algorithms that display less bias. For comparison, the least biased algorithm according to NIST results belongs to a firm called Idemia, which has an FMR Max/Min of 5(1). However, like Corsight, the highest false match rate for Idemia’s algorithm was for older West African women. Computer Weekly understands this is a common problem with many of the facial recognition algorithms NIST tests because this group is not typically well-represented in the underlying training data of most firms. Computer Weekly also confirmed with NIST that the FMR metric cited by Corsight relates to one-to-one verification, rather than the one-to-many situation police forces would be using it in. This is a key distinction, because if 1,000 people are enrolled in a facial recognition system that was built on one-to-one verification, then the false positive rate will be 1,000 times larger than the metrics held by NIST for FMR testing. “If a developer implements 1:N (one-to-many) search as N 1:1 comparisons, then the likelihood of a false positive from a search is expected to be proportional to the false match for the 1:1 comparison algorithm,” said NIST scientist Patrick Grother. “Some developers do not implement 1:N search that way.” Commenting on the contrast between this testing methodology and the practical scenarios the tech will be deployed in, Birmingham Law School’s Yeung said one-to-one is for use in stable environments to provide admission to spaces with limited access, such as airport passport gates, where only one person’s biometric data is scrutinised at a time. “One-to-many is entirely different – it’s an entirely different process, an entirely different technical challenge, and therefore cannot typically achieve equivalent levels of accuracy,” she said. Computer Weekly contacted Corsight about every aspect of the story related to its algorithmic testing, including where the “0.0006” figure is drawn from and its various claims to have the “least biased” algorithm. 
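To make that distinction concrete, here is a minimal, hypothetical sketch in Python of the arithmetic NIST’s Patrick Grother describes above: if a 1:N watchlist search is effectively run as N independent 1:1 comparisons, the expected number of false alerts grows roughly in proportion to the watchlist size multiplied by the 1:1 false match rate. This is illustration only, not any supplier’s or force’s actual implementation; the 0.0006 figure is the supplier-cited 1:1 FMR quoted above, while the watchlist sizes and probe count are invented for the example.

# Illustrative only: how a 1:1 false match rate (FMR) scales when a watchlist
# search is treated as N independent one-to-one comparisons. Hypothetical
# numbers; not the NEC or Corsight algorithm, and not Essex Police's settings.

def expected_false_alerts(fmr_1to1: float, watchlist_size: int, faces_scanned: int) -> float:
    """Expected number of false positive alerts over a deployment."""
    # Chance that one scanned face falsely matches at least one watchlist entry;
    # for a small FMR this is approximately watchlist_size * fmr_1to1.
    p_false_alert_per_face = 1.0 - (1.0 - fmr_1to1) ** watchlist_size
    return faces_scanned * p_false_alert_per_face

if __name__ == "__main__":
    FMR = 0.0006  # the supplier-cited 1:1 figure from the EIA
    for n in (100, 1_000):  # hypothetical watchlist sizes
        alerts = expected_false_alerts(FMR, n, faces_scanned=10_000)
        print(f"watchlist of {n:>5}: ~{alerts:,.0f} expected false alerts per 10,000 faces")

Run as written, the sketch suggests why a headline 1:1 figure cannot be read as the operational false positive rate of a live deployment: with a hypothetical 1,000-person watchlist, thousands of false alerts per 10,000 faces scanned would be expected before match thresholds and human verification are applied.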
“The facts presented in your article are partial, manipulated and misleading,” said a company spokesperson. “Corsight AI’s algorithms have been tested by numerous entities, including NIST, and have been proven to be the least biased in the industry in terms of gender and ethnicity. This is a major factor for our commercial and government clients.” However, Corsight was either unable or unwilling to specify which facts are “partial, manipulated or misleading” in response to Computer Weekly’s request for clarification. Computer Weekly also contacted Corsight about whether it has done any further testing by running N one-to-one comparisons, and whether it has changed the system’s threshold settings for detecting a match to suppress the false positive rate, but received no response on these points. While most facial recognition developers submit their algorithms to NIST for testing on an annual or bi-annual basis, Corsight last submitted an algorithm in mid-2022. Computer Weekly contacted Corsight about why this was the case, given that most algorithms in NIST testing show continuous improvement with each submission, but again received no response on this point. The Essex Police EIA also highlights testing of the Corsight algorithm conducted in 2022 by the Department of Homeland Security (DHS), claiming it demonstrated “Corsight’s capability to perform equally across all demographics”. However, Big Brother Watch’s Hurfurt highlighted that the DHS study focused on bias in the context of true positives, and did not assess the algorithm for inequality in false positives. This is a key distinction for the testing of LFR systems, as false negatives where the system fails to recognise someone will likely not lead to incorrect stops or other adverse effects, whereas a false positive where the system confuses two people could have more severe consequences for an individual. The DHS itself also publicly came out against Corsight’s representation of the test results, after the firm claimed in subsequent marketing materials that “no matter how you look at it, Corsight is ranked #1. #1 in overall recognition, #1 in dark skin, #1 in Asian, #1 in female”. Speaking with IVPM in August 2023, DHS said: “We do not know what this claim, being ‘#1’ is referring to.” The department added that the rules of the testing required companies to get their claims cleared through DHS to ensure they do not misrepresent their performance. In its breakdown of the test results, IVPM noted that systems of multiple other manufacturers achieved similar results to Corsight. The company did not respond to a request for comment about the DHS testing. Computer Weekly contacted Essex Police about all the issues raised around Corsight testing, but received no direct response to these points from the force. While Essex Police claimed in its EIA that it “also sought advice from their own independent Data and Digital Ethics Committee in relation to their use of LFR generally”, meeting minutes obtained via FoI rules show that key impacts had not been considered. For example, when one panel member questioned how LFR deployments could affect community events or protests, and how the force could avoid the technology having a “chilling presence”, the officer present (whose name has been redacted from the document) said “that’s a pretty good point, actually”, adding that he had “made a note” to consider this going forward. 
The EIA itself also makes no mention of community events or protests, and does not specify how different groups could be affected by these different deployment scenarios. Elsewhere in the EIA, Essex Police claims that the system is likely to have minimal impact across age, gender and race, citing the 0.6 threshold setting, as well as NIST and DHS testing, as ways of achieving “equitability” across different demographics. Again, this threshold setting relates to a completely different system used by the Met and South Wales Police. For each protected characteristic, the EIA has a section on “mitigating” actions that can be taken to reduce adverse impacts. While the “ethnicity” section again highlights the National Physical Laboratory’s testing of a completely different algorithm, most other sections note that “any watchlist created will be done so as close to the deployment as possible, therefore hoping to ensure the most accurate and up-to-date images of persons being added are uploaded”. However, Yeung noted that the EIA makes no mention of the specific watchlist creation criteria beyond high-level “categories of images” that can be included, and the claimed equality impacts of that process. For example, it does not consider how people from certain ethnic minority or religious backgrounds could be disproportionally impacted as a result of their over-representation in police databases, or the issue of unlawful custody image retention whereby the Home Office is continuing to hold millions of custody images illegally in the Police National Database (PND). While the ethics panel meeting minutes offer greater insight into how Essex Police is approaching watchlist creation, the custody image retention issue was also not mentioned. Responding to Computer Weekly’s questions about the meeting minutes and the lack of scrutiny of key issues related to UK police LFR deployments, an Essex Police spokesperson said: “Our polices and processes around the use of live facial recognition have been carefully scrutinised through a thorough ethics panel.” Instead, the officer present explained how watchlists and deployments are decided based on the “intelligence case”, which then has to be justified as both proportionate and necessary. On the “Southend intelligence case”, the officer said deploying in the town centre would be permissible because “that’s where the most footfall is, the most opportunity to locate outstanding suspects”. They added: “The watchlist [then] has to be justified by the key elements, the policing purpose. Everything has to be proportionate and strictly necessary to be able to deploy… If the commander in Southend said, ‘I want to put everyone that’s wanted for shoplifting across Essex on the watchlist for Southend’, the answer would be no, because is it necessary? Probably not. Is it proportionate? I don’t think it is. Would it be proportionate to have individuals who are outstanding for shoplifting from the Southend area? Yes, because it’s local.” However, the officer also said that, on most occasions, the systems would be deployed to catch “our most serious offenders”, as this would be easier to justify from a public perception point of view. They added that, during the summer, it would be easier to justify deployments because of the seasonal population increase in Southend. “We know that there is a general increase in violence during those months. 
So, we don’t need to go down to the weeds to specifically look at grievous bodily harm [GBH] or murder or rape, because they’re not necessarily fuelled by a spike in terms of seasonality, for example,” they said. “However, we know that because the general population increases significantly, the level of violence increases significantly, which would justify that I could put those serious crimes on that watchlist.” Commenting on the responses given to the ethics panel, Yeung said they “failed entirely to provide me with confidence that their proposed deployments will have the required legal safeguards in place”. According to the Court of Appeal judgment against South Wales Police in the Bridges case, the force’s facial recognition policy contained “fundamental deficiencies” in relation to the “who” and “where” questions of LFR. “In relation to both of those questions, too much discretion is currently left to individual police officers,” it said. “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR [automated facial recognition] can be deployed.” Yeung added: “The same applies to these responses of Essex Police force, failing to adequately answer the ‘who’ and ‘where’ questions concerning their proposed facial recognition deployments. “Worse still, the court stated that a police force’s local policies can only satisfy the requirements that the privacy interventions arising from use of LFR are ‘prescribed by law’ if they are published. The documents were obtained by Big Brother Watch through freedom of information requests, strongly suggesting that even these basic legal safeguards are not being met.” Yeung added that South Wales Police’s use of the technology was found to be unlawful in the Bridges case because there was excessive discretion left in the hands of individual police officers, allowing undue opportunities for arbitrary decision-making and abuses of power. “Every decision – where you will deploy, whose face is placed on the watchlist and why, and the duration of deployment – must be specified in advance, documented and justified in accordance with the tests of proportionality and necessity,” she said. “I don’t see any of that happening. There are simply vague claims that ‘we’ll make sure we apply the legal test’, but how? They just offer unsubstantiated promises that ‘we will abide by the law’ without specifying how they will do so by meeting specific legal requirements.” Yeung further added that these documents indicate that the police force is not looking for specific people wanted for serious crimes, but setting up dragnets for a wide variety of ‘wanted’ individuals, including those wanted for non-serious crimes such as shoplifting. “There are many platitudes about being ethical, but there’s nothing concrete indicating how they propose to meet the legal tests of necessity and proportionality,” she said. “In liberal democratic societies, every single decision about an individual by the police made without their consent must be justified in accordance with law.
That means that the police must be able to justify and defend the reasons why every single person whose face is uploaded to the facial recognition watchlist meets the legal test, based on their specific operational purpose.” Yeung concluded that, assuming they can do this, police must also consider the equality impacts of their actions, and how different groups are likely to be affected by their practical deployments: “I don’t see any of that.” In response to the concerns raised around watchlist creation, proportionality and necessity, an Essex Police spokesperson said: “The watchlists for each deployment are created to identify specific people wanted for specific crimes and to enforce orders. To date, we have focused on the types of offences which cause the most harm to our communities, including our hardworking businesses. “This includes violent crime, drugs, sexual offences and thefts from shops. As a result of our deployments, we have arrested people wanted in connection with attempted murder investigations, high-risk domestic abuse cases, GBH, sexual assault, drug supply and aggravated burglary offences. We have also been able to progress investigations and move closer to securing justice for victims.”

Read more about police data and technology
• Metropolitan Police to deploy permanent facial recognition tech in Croydon: The Met is set to deploy permanent live facial recognition cameras on street furniture in Croydon from summer 2025, but local councillors say the decision – which has taken place with no community input – will further contribute to the over-policing of Black communities.
• UK MoJ crime prediction algorithms raise serious concerns: The Ministry of Justice is using one algorithm to predict people’s risk of reoffending and another to predict who will commit murder, but critics say the profiling in these systems raises ‘serious concerns’ over racism, classism and data inaccuracies.
• UK law enforcement data adequacy at risk: The UK government says reforms to police data protection rules will help to simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy.
  • A Common Group of Antidepressants Could Suppress Tumor Growth Across Various Cancer Types

    Targeting the immune system to fight cancer has been in the works for over a decade, and thanks to its precise, personalized approach, it's poised to shape the future of oncology. As our understanding of how immunotherapy can be used against cancer grows, scientists are now reconsidering existing drugs, particularly those that affect the immune system, for their potential role in cancer treatment. Alongside well-established medications like aspirin, which has shown potential to help the immune system combat cancer, researchers are now turning their attention to antidepressants — and the results are looking promising. A team from UCLA recently published a study in Cell showing how SSRIs, a widely prescribed class of antidepressants, can help the immune system suppress tumor growth across various cancer types. So instead of developing entirely new drugs, could the key lie in repurposing ones we already have? “These drugs have been widely and safely used to treat depression for decades, so repurposing them for cancer would be a lot easier than developing an entirely new therapy,” said senior study author Lili Yang, a member of the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA, in a press statement.

The Role of Antidepressants
SSRIs, or selective serotonin reuptake inhibitors, work by increasing levels of serotonin, a neurotransmitter associated with mood and focus, and by blocking the serotonin transporter (SERT), which typically regulates how much serotonin is available outside our cells. In people with depression, serotonin levels in the brain drop significantly — a problem that SSRIs like fluoxetine (Prozac), citalopram (Celexa), and sertraline (Zoloft) help to address. But serotonin isn’t just about mood. Only about 5 percent of the body’s serotonin is made in the brain. The rest acts as a signaling molecule in many essential bodily functions, including digestion — and, as recent research suggests, immune system regulation. While earlier lab studies hinted that serotonin might help stimulate T-cells, the immune system’s front-line soldiers, its precise role and potential in immunoregulation remained unclear. That is, until now.

Antidepressants and Anti-Tumor Potential
Before studying SSRIs, the UCLA team had explored another class of antidepressants called MAO inhibitors (MAOIs), which also increased serotonin levels by blocking an enzyme known as MAO-A. These drugs showed anti-tumor potential, but due to their higher risk of side effects, researchers shifted their focus to SSRIs. “SERT made for an especially attractive target because the drugs that act on it — SSRIs — are widely used with minimal side effects,” said Bo Li, the study’s first author, in the news release. By using SSRIs to boost serotonin availability, researchers aimed to outmaneuver one of cancer’s suggested strategies: depriving immune cells of the serotonin they need to function effectively. The results were encouraging. In both mouse and human tumor models of melanoma, breast, prostate, colon, and bladder cancers, SSRI treatment shrank tumors by over 50 percent. The key, according to Yang, was “increasing their access to serotonin,” which in turn enhanced the T-cells' ability to attack.

Combining with Existing Cancer Treatments
The team also tested whether combining SSRIs with existing cancer treatments could offer even better results. The answer was yes. In follow-up experiments, all mice with melanoma or colon cancer that received both an SSRI and immune checkpoint blockade (ICB) therapy, a treatment designed to overcome the immune-suppressing nature of tumors, experienced significantly reduced tumor sizes. “Immune checkpoint blockades are effective in fewer than 25 percent of patients,” said study co-author James Elsten-Brown in the press release. “If a safe, widely available drug like an SSRI could make these therapies more effective, it would be hugely impactful.” Using therapies already deemed safe means fewer regulatory hurdles and faster clinical use. “Studies estimate the bench-to-bedside pipeline for new cancer therapies costs an average of $1.5 billion,” Yang said. “When you compare this to the estimated $300 million cost to repurpose FDA-approved drugs, it’s clear why this approach has so much potential.” This article is not offering medical advice and should be used for informational purposes only.

Article Sources
Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the sources used below for this article:
• UCLA Broad Stem Cell Research Center: Drug commonly used as antidepressant helps fight cancer in mice

Having worked as a biomedical research assistant in labs across three countries, Jenny excels at translating complex scientific concepts – ranging from medical breakthroughs and pharmacological discoveries to the latest in nutrition – into engaging, accessible content. Her interests extend to topics such as human evolution, psychology, and quirky animal stories. When she’s not immersed in a popular science book, you’ll find her catching waves or cruising around Vancouver Island on her longboard.
  • Sleep Aids Can Be Uneven and Expensive, Leaving Anxious Patients Lacking

    May 21, 2025 | 5 min read
    One Woman’s Pharmaceutical Journey to a Good Night’s Sleep
    When insomnia took hold of this journalist, she relied on her science reporting to find a medication that (mostly) worked.
    By Rachel Nuwer
    This Nature Outlook is editorially independent, produced with financial support from Avadel.
    I never had issues with sleep until the COVID-19 pandemic. A couple of months into lockdown in 2020, I found myself unable to fall or stay asleep. My worries played on an unstoppable loop, and the longer I lay in bed, the more anxious I became about not sleeping. This vicious cycle left me exhausted. After a few months, I became depressed. It was time to get professional help.
    This was the start of a years-long odyssey to find an effective sleep aid without negative side effects. The first medication I tried was 50 milligrams of an antihistamine called hydroxyzine, prescribed to me after a five-minute telehealth appointment. It effectively knocked me out, but it left me feeling so groggy the next morning that I struggled to get out of bed. I stopped taking it.
    I lacked the energy to meet with a physician again, so I went back to relying on a grab bag of pills. These included over-the-counter melatonin, a hormone used to treat sleep problems; diphenhydramine, an antihistamine and sedative commonly sold as Benadryl; my husband’s gabapentin, which is prescribed to treat epilepsy and nerve pain but is commonly given as an anti-anxiety sleep aid; and tablets of questionable provenance that were labelled as alprazolam, used to treat anxiety conditions, which I acquired on a pre-pandemic trip to Sri Lanka. I rotated through these remedies in an attempt to not become overly reliant on any one of them.
    Last year, my struggle to sleep markedly worsened. Stress still seemed to be in limitless supply. My identity is wrapped up in my job as a science journalist, but as the media industry continues to collapse in on itself, it is becoming more and more difficult to make ends meet. At night, my chest would tighten as I tried to imagine a viable future in my chosen career. Layered on top of that were the stressors of the 2024 US presidential election and interpersonal drama with my increasingly conservative father.
    I found a sympathetic primary-care provider in the form of a physician’s assistant (PA) — a licensed medical professional who, in some states, can prescribe medications but isn’t actually a physician. She listened to my problems and asked me questions about my life. At the end of the appointment, she agreed that I should try the antidepressant bupropion. I was still having trouble sleeping, however, and my night-time anxiety spiked following the election. “Sadly, we are getting a lot of these messages,” my PA said when I told her about this. We added buspirone, an anti-anxiety medication, to my daily regimen. I immediately started sleeping better. But buspirone left me feeling deflated, numb and unmotivated during the day. My PA suggested that, as long as I didn’t develop serious depressive thoughts, I should stick it out for a month to give my body time to adjust.
    I agreed to give it more time. Then, about three weeks in, I woke up one night from a nightmare and felt something crawling through my hair. Then I saw a flash of light, as though someone was standing over me taking a photograph. I quickly realized that these had been hallucinations that occurred in the transition from sleep to wakefulness. Nothing like this had ever happened to me before, and the vividness of the experience was extremely disconcerting. The next day, I learnt that disturbed sleep is a side effect of buspirone. My PA agreed that I should stop the drug.
    But I still needed help to fall asleep. The obvious choice would have been benzodiazepines or ‘Z-drugs’ — classes of medications that have a sedative effect. But these drugs can also lead to dependency. Worryingly, too, a study in mice, published this year, found that one of these drugs, zolpidem (Ambien), might interfere with the brain’s ability to clear waste, including toxic molecules associated with Alzheimer’s disease. These results still need to be replicated in humans, but they do mirror findings from at least one observational study. I told my PA I wanted to steer clear of these medications.
    While reporting another story on sleep medication for this Nature Outlook, I was cautiously excited to learn about a new class of insomnia medications known as dual orexin receptor antagonist (DORA) drugs. These work by blocking a molecule that promotes wakefulness, and they have fewer side effects and a lower risk of dependence compared with other sleep aids. My PA was familiar with one of them, Belsomra, and said I could try it.
    It took almost three weeks for me to receive the prescription, and my insurance would not cover it. There are no generic DORA drugs. Thirty daily tablets of Belsomra was going to cost me an astronomical US$500. But I was desperate to get some sleep, and my pharmacist was able to find a coupon that knocked $150 off the bill. I sucked it up and paid.
    As I write this, I’ve been taking Belsomra on and off for a month. When it works well, I fall asleep quickly and soundly, and wake up feeling clear-headed and rested. About one-quarter of the time, however, my anxiety manages to cut through the medication and I struggle to fall asleep. My PA said that I can try doubling my dose to the maximum 20 milligrams by taking two tablets each night. But I haven’t tried this yet, because I’m aware that each pill I pop before bed is about the same price as ordering a fancy cocktail.
    I held out hope that my health-insurance company, one of the largest in the United States, would eventually agree to cover Belsomra. The initial rejection note that the company sent included a list of eight cheaper, generic Z-drugs and benzodiazepines — all of which carry a risk of dependency — that they required me to try first. My PA and I worked through the list of prescriptions in an effort to make a case that none of them were suitable. And finally, in late March, we had success: the insurance company agreed to pay for Belsomra for the next year. Even with that coverage, however, I’m still required to pay a steep $150 for a month’s supply of the drug, which my pharmacist confirmed is normal for this medication. So, until a generic DORA drug comes out, this particular sleep solution will unfortunately be available only to those who have enough extra income to pay for the privilege.
    I’m certainly aware that my trials and tribulations with insomnia have benefited from a tremendous amount of privilege. I have found an understanding and supportive PA, and my insurance pays for my appointments with her. I live in a country where these medications are available (DORA drugs are not available everywhere yet), and I have enough disposable income to pay hundreds of dollars in the interest of self-care. I also have a level of education, and a job as a science journalist, that allows me to access and comprehend the latest health-care findings, and speak directly with scientists at the forefront of research. I can only imagine the collective exhaustion and frustration of the hundreds of millions of people around the world who are not in my position, and who are struggling on their own to get a good night’s sleep.
    It should not be like this. Medical professionals should be the ones calling the shots on what care their patients need — not insurance companies that are focused on wringing out as much profit as possible from clients who are already paying exorbitant premiums. However, until the system changes, millions of people will continue to take the same tortuous path that I have been forced onto, and resort to medications that might have harmful long-term effects while the most advanced therapies remain tantalizingly out of financial reach.