-
Buying your Nike shoes online is getting a lot more expensive
www.businessinsider.com | 2025-03-20

Nike's strategy includes cutting online promotions. Cheng Xin/Getty Images

- Nike is shifting its online marketplace to a full-price model, reducing discounts.
- Revenue for Nike Direct was down 10% on a currency-neutral basis in fiscal Q3.
- CEO Elliott Hill had previously said the company planned to cut down on promotions.

Online shoppers are on the front lines of Nike's battle against discounts. The sportswear giant is working to reposition its online marketplace, Nike Digital, as a "full-price business," the company told investors on Thursday.

The move is in line with the promotional plans, or lack thereof, that CEO Elliott Hill has previously spoken about. Hill told investors on a Thursday earnings call that there were zero promotional days on Nike Digital in January and February for the North America region, compared to over 30 for the same period in 2024.

"We are reducing promotional days, reducing markdown rates, and shifting closeout liquidation to our Nike factory stores," Matthew Friend, Nike's CFO, said.

The shift comes at a cost, though. The company said it expects digital traffic to be down double digits in fiscal 2026 as a result of the decision, though it will "gradually stabilize and grow with new product launches."

Nike reported $11.3 billion in Q3 revenue on Thursday, down 9% year-over-year. That was better than analysts' estimates of an 11% drop, Bloomberg reported. The period marked the company's first full quarter with Elliott Hill as CEO; he began his role in October.

While Nike has been investing in advertising and innovation, competitors like Adidas and Hoka are gaining mindshare and market share with consumers in the lifestyle and running categories. The sportswear giant spent $1.1 billion on "demand creation," which includes brand marketing, in its most recent quarter, up 8% from this time last year. Nike aired its first Super Bowl ad in nearly 30 years during the big game on February 9.

Hill has previously spoken about Nike's plans to lean into its identity as a running brand and invest more in women's sports. In February, it announced that it would partner with Kim Kardashian to launch NikeSkims, a new brand and a rare move for Nike to partner with an existing external company.

Nike shares were down after the bell during Thursday's investor call. The company expects Q4 gross margins to be down between 400 and 500 basis points, including restructuring charges from last year as well as the predicted impact of tariffs on China and Mexico this year.
-
Making new mom friends is just like dating. For every great catch, there are several duds you need to weed through first.
www.businessinsider.com | 2025-03-20

- As a new mom living in a new area, I felt isolated and craved adult interaction.
- Making friends as an adult is terrifyingly different from connecting with people at work or school.
- After putting myself out there, I met a lot of duds but also made lasting connections.

I remember sitting in my feeding chair at 3 a.m. one night, clutching my newborn and feeling like the loneliest person in the world. Desperate for an adult connection with someone who understood the life I was now thrust into, I tapped out a needy message to a Facebook group full of women I'd never met.

"I've recently moved here and I have a toddler and a week-old baby. Does anyone want to meet for a coffee?" I typed out.

I pressed send, tossed my phone on the bed, and hoped my baby would sleep after her feed.

The next morning I woke up to a message from another mom: "We recently moved here, too, and my baby is 10 days old. Let's meet. Coffee tomorrow?"

I was pleased and nervous. I was in a serious relationship before dating apps really took off, so I wasn't used to putting myself out there and trying to actively meet people online. At school or work, I'd made friends because we were in the same place at the same time. It was easy. Outside of work, I had a circle of friends I'd known for years. This situation felt terrifyingly different, but I needed company, support, and people going through exactly the same experience I was. So I was determined to put myself out there and give it a go.

I felt like I was prepping for a date

On the big day, I thought about every single detail, from my clothes to opening lines and conversation starters. I told my husband where I was going, in case the other mom was a murderous psychopath, and after my eldest daughter went off to nursery school, I bundled up my baby and walked slowly to our rendezvous with a nervous pit in my stomach.

When I arrived, I looked around, trying to work out which mom with a stroller was there for me, and feeling panicky. But as soon as we sat down and exchanged our slightly stilted introductions, things started to flow. It was like we'd met each other before, in another life. Conversation came easily, and we quickly realized there were many similarities between us. There were only four days between our babies, and our older children were also similar ages and starting the same sports club at the same time. It seemed like our connection was meant to be.

I came away from the date feeling positive. I'd made a friend. Since then, we've spoken regularly and seen each other every couple of weeks. Our children are friends, and our families hang out together, too.

Not all friendship dates are created equal

Of course, for every genuine connection, it seems there are a lot of duds out there, too. On one first meetup, the conversation with another woman was so painful that I reverted to work mode and started asking interview-style questions because it was all I could think of to do. After all, everyone likes talking about themselves.

Proving my point, during another first meetup my mom date spoke about herself for the whole two hours we were together. I'm not sure she even asked my baby's name. Unsurprisingly, at the end of both of these meetings, there was no hint of a second outing.

Awkwardly, one of the moms is someone I cross paths with every so often out and about. Sometimes we trade brief, insincere comments with each other, but nothing more.

And this is all okay. I know that I won't be to everyone's taste, and others won't be to mine. This was a lesson I learned when I was dating, and it's still relevant now. It's the way of the world and something I'm constantly trying to teach my children. They don't have to be friends with everyone, but they do need to be kind.

Putting myself out there and braving these mom dates has found me some of my best friends but also landed me in some seriously cringey moments. Friend dating is not for the faint-hearted, but it is definitely worth the time and effort.
-
This Is the Best Scene in Star Trek: Voyager's First Season
gizmodo.com

Star Trek has always found great strength in the episodic format. Sure, the classic shows all dabbled in serialized elements, and some of them excelled the further they played with them, like Deep Space Nine did in its back half. There's a reason that, when Trek was revitalized for the streaming age in a predominantly serialized form, fans bristled (and then turned to point at modern examples that leaned into bucking that trend, like Strange New Worlds, as a return to form). For generations, Trek has prided itself on that episodic nature: you can tune in at any point in a season, in a series, watch an adventure, and get out, and you've had everything you've needed, and hopefully a killer story along the way. It might be fair for many people to say, then, that episodic storytelling is where Star Trek is at its best.

But sometimes those disparate selves, even in the classic heyday of the franchise, could brush up against each other and create interesting, and occasionally frustrating, friction, and Voyager was perhaps one of the greatest examples of that in the '90s oeuvre. Its broader premise of a ship and crew stranded on the other side of the galaxy, 70 years' travel from Earth, creates fascinating questions that thrive in serialized elements: the impact on the crew and their relationships with each other, the scarcity of resources, the very act of sustaining a starship in a landscape where technology and attitudes might be radically different from what is known in Federation space. But it was also a show about getting in, more often than not in the early days scanning an anomaly of the week, and getting out, just in time for it all to happen again next time. Even with those serialized elements hanging over its scenario and setting, Voyager, perhaps even more than TNG and DS9, was staunch in its championing of the episodic format that Trek had always embraced, even if it ultimately meant it's a show whose quality could veer wildly from week to week. Sometimes, however, it too could have its cake and eat it, like it did 30 years ago today with the broadcast of "Prime Factors," the ninth episode of its first season.

The episode at large has an intriguing premise. Voyager finds itself crossing paths with an amicable advanced civilization, the Sikarians, who crave pleasure and rejoice at the opportunity to lavish strange new travelers with gifts and samples of their idyllic society. But when the crew discovers that the Sikarians have space-folding transporter technology that could either significantly reduce their journey home or eliminate it entirely, as well as strict laws that forbid the sharing of such technology (not unlike an equivalent of the Prime Directive), friction begins to emerge, not just between Voyager and Sikarian leadership, but between parties on Voyager itself and elements of Sikarian society who think a deal could be made to trade for the technology regardless of their leaders' wishes. This all climaxes when, as Voyager prepares to leave the Sikarians behind, a group of the crew decides to go rogue and make the trade: Voyager's library, full of new stories the Sikarians crave, in exchange for a sample of the transporter device. At first, the ideological divide is unsurprising; the effort is spearheaded by B'Elanna Torres and a group of other ex-Maquis crew, who protest that Janeway's Starfleet standards are getting in the way of a chance to get home. But they and the audience alike are surprised when they are aided in the trade by Tuvok, Voyager's staunchest rules-stickler and Captain Janeway's closest confidant.

But again, this is an episodic story, and it's nine episodes into Voyager's journey. They're not going to get home, and "Prime Factors" knows it, but it plays with the idea. Tuvok makes the trade, but the tech doesn't fully integrate into Voyager's systems, and it nearly destroys the ship in the process of trying to use it. Things don't just go bad; they go about as near to catastrophic as they could. That's not surprising. What is surprising is what comes next: an absolutely incredible scene, when Janeway orders Tuvok and Torres into her office to see who claims responsibility for disobeying her orders. First, Torres attempts to fall on the sword, but Tuvok won't allow it, revealing to a stunned Janeway that it was he who made the trade, operating on the Vulcan logic that he could take on the ethical and moral quandary instead of leaving Janeway herself to be plagued by it.

And Kate Mulgrew just kills it in response. The expected fury is there when she dresses down B'Elanna, filled with a bitter disappointment that builds on their burgeoning relationship, so soon after she'd made the controversial decision to make Torres chief engineer. Although Janeway doesn't ever break out into full-on shouting, she practically growls every word she can in Torres' direction, raising her voice just enough to let you know she means business. It's arguably the most fearsome she's been in the show so far, and yet it's just as arguable that what comes next is even more fearsome, when she dismisses Torres and turns to Tuvok. The anger is no longer there on the surface, traded for a melancholy softness as she lays out how deeply she feels the betrayal of not just her most trusted senior officer, but one of her only true friends on Voyager. The look on Janeway's face as Tuvok explains his logical view of the situation to her, as well as his frank estimation of the punishment he should face, is absolute heartbreak, even if Mulgrew never goes so far as to allow her voice more than a tremble to show the grief Janeway feels. The scene ends, the whole episode ends, in this uneasy space where both Janeway and Tuvok feel like their relationship has been irrevocably changed by this moment: their trust has been broken, and could one day be rebuilt, but is in this moment raw and volatile. They can carry on with a reprimand as captain and security chief, but whether or not they can carry on as confidants, as friends, is up in the air.

It's so good, but again, the next time we see them, in the very next episode, everything is fine. Everything has to be. Star Trek: Voyager is an episodic show, after all. All that tension, that heartbreak, those questions: it all has to fade into nothing so we can pick ourselves up and carry on with the status quo. There's a frustration there, to be sure, that the show had something with so much potential, executed it so well, and it ultimately can't matter. There's a fascinating thought experiment in imagining what it would've been like if we had been allowed to see the ramifications of this relationship's breakdown play out over weeks of stories, seasons even. But that is just not the kind of show Voyager is. And yet maybe there's something in that that allowed us to get a moment as great as the final scene in "Prime Factors." Would a serialized show as early in its run as Voyager was here threaten to radically alter one of the most important relationships on the series so soon? Were the choices made here emboldened by the fact that this divide, this emotional rift, only had to exist within the context of this one scene, and the performances could go all out knowing that it was all going to dissipate off-screen? Whatever the reason, we got it anyway, and in getting it we saw a glimpse of what Voyager could be at its very best.
-
On Amazon, This EcoFlow Power Station With Solar Panels Just Dropped to Its Lowest Price Ever
gizmodo.com

As spring approaches, you might be planning more family camping trips. And now that we're getting into the rainy season, maybe you've realized you need to be better prepared for potential emergencies. In either scenario, consider the EcoFlow River 3 portable power station. Currently, it's available at a massive 44% discount, cutting the price from $338 to $189 and saving you $149.

See at Amazon

Power All Your Essentials

The EcoFlow River 3 delivers a power output of up to 600W, which can be boosted to 1200W with X-Boost technology. With multiple ports available, you can simultaneously power up to seven essential devices without fear of overloading. It features three AC outlets, two USB-A ports, one USB-C port, and a car charging outlet, for maximum compatibility with almost all of your electronic devices.

In case of a power outage, you can still operate essential devices efficiently. The River 3 provides enough power to keep a 60W refrigerator running for approximately three and a half hours, while a standard 9W lightbulb can last up to 18 hours. You can rest easy knowing that even during a blackout, you'll be prepared. Your devices will stay charged, too: an EcoFlow River 3 holds enough power to charge an average smartphone roughly 19 times, a camera approximately 12 times, and a drone about six times.

The EcoFlow River 3 recharges swiftly, reaching full capacity in just 60 minutes. For spontaneous trips, you can begin charging it while packing, and it'll be ready by the time you depart. If you're using it without access to power outlets, such as while camping, you can use the included solar panels, which allow the EcoFlow River 3 to fully charge within 7.3 hours via solar energy. Additionally, the four-panel setup folds down to fit into compact spaces like backpacks or your vehicle's trunk.

This power station is also crafted for portability: weighing just 10.4 lbs, the EcoFlow River 3 features a handle that makes it simple to carry and transport. And it's built to last: you'll be able to get over 3,000 recharge cycles out of this system, which works out to roughly 10 years of use.

See at Amazon
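The runtime and charge-count claims above are simple battery arithmetic. Here is a minimal sketch of that math in Python, assuming a nominal capacity of around 250Wh and roughly 85% conversion efficiency; both figures are assumptions for illustration, not specs from the listing, and real-world runtimes vary with load:

```python
# Rough battery-runtime arithmetic for a portable power station.
# ASSUMPTIONS (not from the article): ~250 Wh nominal capacity and
# ~85% inverter/conversion efficiency. Check the product's spec sheet
# for real figures; loads that cycle on and off (like refrigerators)
# will vary further in practice.

CAPACITY_WH = 250.0
EFFICIENCY = 0.85

def runtime_hours(load_watts: float) -> float:
    """Hours a constant load can run on one full charge."""
    return CAPACITY_WH * EFFICIENCY / load_watts

def full_charges(device_battery_wh: float) -> float:
    """Approximate number of times a device battery can be refilled."""
    return CAPACITY_WH * EFFICIENCY / device_battery_wh

print(f"60W refrigerator: {runtime_hours(60):.1f} h")                 # ~3.5 h
print(f"smartphone (~11 Wh battery): {full_charges(11):.0f} charges") # ~19
```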
-
Simulating Inner Mechanism in Blender Using Greebles! #b3d
www.youtube.com

Simulating inner mechanism using the Random Flow add-on in Blender.

Shops:
blendermarket.com/creators/blenderguppy
gumroad.com/blenderguppy

Patreon:
patreon.com/blenderguppy

#b3d #conceptart #blender3d #blenderaddon #blendermarketing
-
Using Random Panels' Bevel Type to Quickly Model a Background Asset! #b3d
www.youtube.com

Watch Random Flow videos: https://youtube.com/playlist?list=PLKFJy6TgdDCIC8rEkGbY09tE0IEn5j5b3&si=99m9czjgBALNZY8Z

Use Random Flow's Random Panel to quickly flesh out a background asset in the form of a sci-fi crate.

Check out my tools: https://www.blenderguppy.com/add-ons

Visit my shops:
https://gumroad.com/blenderguppy
https://blendermarket.com/creators/blenderguppy

Become my Patron: https://patreon.com/blenderguppy

Follow me:
https://facebook.com/blenderguppy
https://instagram.com/blenderguppy
https://twitter.com/blenderguppy

#b3d #blender3d #3dmodeling #3dtexturing #conceptart
-
The reality of generative AI in the clinic
www.microsoft.com

Transcript

[MUSIC]

[BOOK PASSAGE]

PETER LEE: The workload on healthcare workers in the United States has increased dramatically over the past 20 years, and in the worst way possible. Far too much of the practical, day-to-day work of healthcare has evolved into a crushing slog of filling out and handling paperwork.

[END OF BOOK PASSAGE]

[THEME MUSIC]

This is The AI Revolution in Medicine, Revisited. I'm your host, Peter Lee.

Shortly after OpenAI's GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?

In this series, we'll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.

[THEME MUSIC FADES]

What I read there at the top is a passage from Chapter 2 of the book, which captures part of what we're going to cover in this episode.

In our book, we predicted how AI would be leveraged in the clinic. Some of those predictions, I felt, were slam dunks: for example, AI being used to listen to doctor-patient conversations and write clinical notes. There were already early products coming out in the world, not using generative AI, that were doing just that. But other predictions we made were bolder, for instance, on the use of generative AI as a second set of eyes, to look over the shoulder of a doctor or a nurse or a patient and spot mistakes.

In this episode, I'm pleased to welcome Dr. Chris Longhurst and Dr. Sara Murray to talk about how clinicians in their respective systems are using AI, their reactions to it, and what's ahead. Chris is the chief clinical and innovation officer at UC San Diego Health, and he is also the executive director of the Joan & Irwin Jacobs Center for Health Innovation. He's in charge of UCSD Health's digital strategy, including the integration of new technologies from bedside to bench and reaching across UCSD Health, the School of Medicine, and the Jacobs School of Engineering. Chris is a board-certified pediatrician and clinical informaticist.

Sara is vice president and chief health AI officer at UC San Francisco Health. Sara is an internal medicine specialist and associate professor of clinical medicine. A doctor, a professor of medicine, and a strategic health system leader, she builds infrastructure and governance processes to ensure that UCSF's deployments of AI, including both AI procured from companies and AI-powered tools developed in-house, are trustworthy and ethical.

I've known Chris and Sara for years, and what's really impressed me about their work, and frankly the work of all the guests we'll have on the show, is that they've all done something significant to advance the use of AI in healthcare.

[TRANSITION MUSIC]

Here's my conversation with Dr. Chris Longhurst:

LEE: Chris, thank you so much for joining us today.

CHRISTOPHER LONGHURST: Peter, it's a pleasure to be here. Really appreciate it.

LEE: We're going to get into, you know, what's happening in the clinic with AI. But I think we need to find out a little bit more about you first.
I introduced you as a person with a fancy title, chief clinical and innovation officer.

LONGHURST: Well, I have a little bit of a unicorn job because my portfolio includes information technology, and I'm a recovering CIO after spending seven years in that role. It also includes quality, patient safety, case management, and the office of our chief medical officer.

And so I'm really trying to unify our mission to deliver highly reliable care with these new tools in a way that allows us to transform that care. One good analogy, I think, is that it's about the game, right. Our job is not only to play the game and win the game using the existing tools but also to change the game by leveraging these new tools and showing the rest of the country how that can be done.

LEE: And so as you're doing that, I can understand, of course, you're working at a very, kind of, senior executive level. But, you know, when I've visited you at UCSD Health, you're also working with clinicians, doctors, and nurses all the time. In a way, I viewed you as, sort of, connective tissue between these things. Is that accurate?

LONGHURST: Well, sure. And we've got, you know, several physicians who are part of the executive team who are also continuing to practice, and I think that's one of the ways in which doctors on the executive team can bring value: being that connective tissue, being the ears on the ground, and a little dose of reality.

LEE: [LAUGHS] Well, in fact, that reality is really what I want to delve into. But I just want to, before getting into that, talk a little bit about AI and your encounters with AI. And I think we have to do it in two stages, because there is AI and machine learning and data analytics prior to the rise of generative AI and then, of course, after. And so tell us a little bit about, you know, what got you into health informatics and AI to begin with.

LONGHURST: Well, Peter, I know that you play video games, and I did too for many years. So I was an early John Carmack, id Software, Castle Wolfenstein, and Doom fan.

LEE: Love it.

LONGHURST: And that kept me occupied because I lived out in the country on 50 acres of almond trees. And so it was computer gaming that first got me into computers.

But during medical school, I decided to pursue graduate work in this field called health informatics. And actually my master's thesis was using machine learning to help identify and distinguish innocent from pathologic heart murmurs in children. And I worked with Dr. Nancy Reed at UC Davis, who had programmed using Lisp, a really fancy tool, to do exactly that.

And I will tell you that if I never see another parenthesis in Lisp code again, it'll be too soon. So I spent a solid year on that.

LEE: [LAUGHS] No, no, but you should wear that as a badge of honor. And I will guess that no other guest on this podcast series will have programmed in Lisp. So kudos to you.

LONGHURST: [LAUGHS] Well, it was a lot of work, and I learned a lot, but as you can imagine, it wasn't highly successful at the time. And fast-forward, we've had lots of traditional machine learning kinds of activities using discrete data for predictive analytics, to help predict flow in the hospital and even sepsis, which we can talk about. But as you said, the advent of generative AI in the fall of 2022 was a real game-changer.

LEE: Well, you have this interest in technology, and, in fact, I do know you as a fairly intensely geeky person. Really, I think maybe that's one reason why we've been attracted to each other. But you also got drawn into medicine.
Where did that come from?

LONGHURST: So my father was a practicing cardiologist and scientist. He was MD-PhD trained, and he really shared with me both a love of medicine and of science. I worked in his lab for three summers, and it was during college that I decided I wanted to apply to medical school, because the human side of the science really drew me in.

But my father was the one who really identified that it was important to cross-train. And that's why I decided to take time off to do that master's degree in health informatics and see if I could figure out how to take two disparate fields and really combine them into one.

I actually went down to Stanford to become a pediatrician because they have a standalone children's hospital that's one of the best in the country. And I still practice pediatrics and see newborns, and it's a passion for me and part of my identity.

LEE: Well, I'm just endlessly fascinated and impressed with people who can span these two worlds in the way that you've done. So now, you know, November 2022, ChatGPT gets released to the world, and then, you know, a few months later, GPT-4, and then, of course, in the last two years, so much has happened. But what was your first encounter with what we now know of as generative AI?

LONGHURST: So I remember when ChatGPT was released, and, you know, some of my computer science nerd friends, we were on text threads, you know, with a lot of mind-blown emojis. But when it really hit medicine was when I got a call right after Thanksgiving in 2022 from my colleague.

He was playing with ChatGPT, and he said to me, "Chris, I've been feeding it patient questions and you wouldn't believe the responses." And he emailed some of the examples to me, and my mind was blown.

And so that's when I became one of the reviewers on the paper that was published in April of 2023 that showed not only could ChatGPT help answer questions from patients in a high-quality way, but it also expressed a tremendous amount of empathy.[1] And in fact, in our review, the clickbait headlines that came out of the paper were that the chatbot was both higher quality and more empathetic than doctors.

But that wasn't my takeaway at all. In fact, I'll... And so, of course, that's how we became one of the first two sites in the country to roll out GPT inside our electronic health record to help draft answers to patient questions.

LEE: And, you know, one thing that's worth emphasizing in the story that you've just told is that there is no other major health system that has been confronting the reality of generative AI longer than UC San Diego Health, and I think that's largely because of your drive and early adoption.

And many listeners of this podcast will know what Epic is, but many will not. And so it's worth saying that Epic is a very important creator of electronic health record systems. And of course, UC San Diego Health uses Epic to store all of the clinical data for its patients.

And then Sumit is, of course, Sumit Rana, who is president at Epic.

LONGHURST: And in truth, you know, for health systems that have thought through this, most of the answers are not actually generated by the doctors themselves. Many times, it's mid-level providers, protocol schedulers, other roles, because the questions can be about anything from rescheduling an appointment to a medication refill. They don't all require doctors.

When they do, it's a more complicated question, and sometimes it can require a more complicated answer.
And in many cases, the clinicians will see a long, complex question, and rather than typing an answer, they'll say, "You know, this is complicated. Why don't you schedule a visit with me so we can talk about it more?"

LEE: Yeah, so now you've made a decision to contact people at Epic to, what, posit the idea that AI might be able to make responding to patient queries easier? Is that the story here?

LONGHURST: That's exactly right. And Sumit knew well that this is a challenge across many organizations. This is not unique to UC San Diego or Stanford. And there have been a lot of publications about it. It's even been in the lay press. So our hypothesis was that using GPT to help draft responses for doctors would save them time, make it easier, and potentially result in higher-quality, more empathetic answers to patients.

LEE: And so now the thing that I was so impressed with is you actually did a carefully controlled study to try to understand how well that works. So tell us a little bit first about the results of that study and then how you set it up.

LONGHURST: Sure. Well, first, I want to acknowledge something you said at the beginning, which is that one of my hats is the executive director of the Joan & Irwin Jacobs Center for Health Innovation. And we're incredibly grateful to the Jacobs family for their gift, which has allowed us not only to implement AI as part of hospital operations but also to have resources that other health systems may not have to be able to study outcomes. And so that really enabled what we're going to talk about here.

LEE: Right. By the way, one of the things I was personally so fascinated by is, of course, in our book, we speculated that things like after-visit notes to patients and responding to patient queries might be something that happens. And you, at the same time we were writing the book, were actually actively trying to make that real, which is just incredible and, for me and I think my coauthors, pretty affirming.

LONGHURST: I think you guys were really prescient in your vision. The book is tremendous. I have a signed copy of Peter's book, and I recommend it for all your listeners. [LAUGHTER]

LEE: All right, so now what have you found about...

LONGHURST: Yeah.

LEE: ...generative AI?

LONGHURST: Yeah. Well, first, to understand what we found, you have to understand how we built [the AI inbox response tool]. And so Stanford and UC San Diego really collaborated with Epic on designing what this would look like. So the doctor gets that patient message. We feed some information to GPT: not only the message but also some information about the patient, their problems and medications and past medical and surgical history and that sort of thing.

LEE: Is there a privacy concern that patients should be worried about when that happens?

LONGHURST: Yeah, it's a really good question. There's not, because we're operating in partnership with Epic and Microsoft in a HIPAA-compliant cloud. And so that data is not only secure and private, but keeping it that way is our top priority.

LEE: Great.

LONGHURST: So once we feed that into GPT, of course, we very quickly get a draft message that we could send to a patient. But we chose not to just send that message to a patient. So part of our AI governance is keeping a human in the loop. And there are two buttons that allow that clinician to review the message. One button says "Edit draft message," and the other button says "Start new blank message." So there's no button that says just "Send now." And that really is illustrative of the approach that we took.
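[Editor's note: below is a minimal Python sketch of the human-in-the-loop drafting flow Longhurst describes. It is purely illustrative: the function names, data shapes, and strings are hypothetical, not UCSD's or Epic's actual implementation.]

```python
# Hypothetical sketch of an AI-drafted inbox reply with a human in the loop.
# Not actual Epic/UCSD code; all names and structures are illustrative.

from dataclasses import dataclass, field

DISCLOSURE = ("This message was automatically generated and reviewed "
              "and edited by your doctor.")

@dataclass
class PatientContext:
    message: str
    problems: list = field(default_factory=list)
    medications: list = field(default_factory=list)

def draft_reply(ctx: PatientContext) -> str:
    """Stand-in for a call to a HIPAA-compliant GPT endpoint: the prompt
    carries the patient's message plus chart context, as described here."""
    # A real system would send this prompt to the model and return its draft.
    return f"[model draft responding to: {ctx.message!r}]"

def respond(ctx: PatientContext, choice: str, clinician_text: str) -> str:
    """No 'send now' path: the clinician either edits the AI draft or
    starts from a blank message; edited drafts get a disclosure label."""
    if choice == "edit_draft":
        # clinician_text is the AI draft after human review and editing
        return clinician_text + "\n\n" + DISCLOSURE
    if choice == "start_blank":
        return clinician_text  # written from scratch, no AI involved
    raise ValueError("choice must be 'edit_draft' or 'start_blank'")

ctx = PatientContext("Are my lab results normal?", problems=["hypertension"])
print(draft_reply(ctx))
print(respond(ctx, "edit_draft", "Your labs look normal; no action needed."))
```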
The second thing, though, that we chose to do, which I think is really interesting from a conversation standpoint, is that our AI governance, as they were looking at this, said, "You know, AI is new and novel. It can be scary to patients. And if we want to maximize trust with our patients, we should maximize transparency." And so anytime a clinician uses the button that says "Edit draft response," we automatically append something to the message that says, "This message was automatically generated and reviewed and edited by your doctor." We felt strongly that was the right approach, and we've had a lot of positive feedback.

LEE: And so we'll want to get into, you know, how good these messages are, whether there are issues with bias or hallucination, but before doing that, you know, on this human in the loop: this was another theme in our book. And in fact, we recommended this. But there were other health systems around the country that were also later experimenting with similar ideas. And some have taken different approaches. In fact, as time has gone on, if anything, it seems like it's become a little bit less clear, this sort of labeling idea. Has your view on this evolved at all over the last two years?

LONGHURST: First of all, I'm glad that we did it. I think it was the right choice for the University of California, and in fact, the other four UC sites are all doing this as well. There is variability across the organizations that are using this functionality, and as you suggest, there are tens of thousands of physicians and hundreds of thousands, if not millions, of patients receiving these messages. And it's been highlighted a bit in the press.

I can tell you that, speaking of our approach to transparency, one of our lawmakers in the state of California heard about this and actually proposed a bill that was signed into legislation by our governor, so that effective Jan. 1, any communication with patients that uses AI has to be disclosed to those patients. And so there is some thought that this is perhaps the right approach.

I don't think that it's a perfect approach, though. We're using AI in more and more ways, and it's not as if we're going to be able to disclose every single time that we're doing it, whether to prioritize, you know, scheduling for the sickest patients or to help operationally on billing or something else. And so I think that there are other ways we need to figure it out. But we have called on national societies and others to try to create some guidelines around this, because we should be as transparent as we can with our patients.

LEE: Obviously, one of the issues, and we highlighted this a lot in our book, is the problem of hallucination. And surely this must be an issue when you're having AI draft these notes to patients. What have you found?

LONGHURST: We were worried about that when we rolled it out. And what we found is that not only were there very few hallucinations; in some cases, our doctors were learning from the GPT. And I can give you an example. A patient who had had a visit wrote their doctor afterwards and said, "Doc, I've been thinking a lot about what we discussed in quitting smoking marijuana." And the GPT draft reply said something to the effect of, "That's great news. Here's a bunch of evidence on how smoking marijuana can harm your lungs and cause other effects. And by the way, since you live in the state of California, here's the marijuana quitters' helpline." And the doctor who was sending this called me up to tell me about it.
And I said, "Well, is there a marijuana quitters' helpline in the state of California?" And he said, "I didn't know, so I Googled it. And yeah, there is." And so that's an example of the GPT actually having more information than, you know, a primary care clinician might have. And so there are clearly cases where the GPT can help us increase the quality. In addition, some of the feedback that we've been getting, both anecdotally and now through measurement, is that these draft responses do carry that tone of empathy that Dr. [John] Ayers [2] and I saw in the original manuscript. And we've heard from our clinicians that it's reminding them to be empathetic, because you don't always have that time when you're hammering out a quick, short message, right?

LEE: You know, ...

LONGHURST: Exactly right, Peter. In fact, one of the findings in Dr. Ayers's manuscript that didn't get as much attention but I think is really important was the difference in length between the responses. So I was one of the putatively blinded reviewers, but as I was looking at the questions and answers, it was really obvious which ones were the chatbot's and which ones were the doctors', because the chatbot was always, you know, three or four paragraphs and the doctor was three or four sentences, right. It's about time. And so we saw that in the results of our study.

LEE: All right, so now let's get into those results.

LONGHURST: OK. Well, first of all, my hypothesis was that this would help us save time, and I was wrong. It turns out a busy primary care clinician might get about 30 messages a day from patients, and each one of those messages might take about 30 seconds to type a quick response: a two-sentence response, a dot phrase, a macro. "Your labs are normal. No need to worry. I'll call you if anything comes up." After we implemented the AI tool, it still took about 30 seconds per message to respond. But we saw that the responses were two to three times longer on average, and they carried a more empathetic tone. [3] And our physicians told us it decreased cognitive burden, which is not surprising, because any of you who have written know that it's much easier to edit somebody else's copy than it is to face a blank screen, right. That's why I like to be senior author, not lead author.

And so the tool actually helped quite a bit, but it didn't necessarily help in the ways that we had expected. There are some other sites that have now found a little bit of time savings, but it's really nominal overall. The Stanford study, which was done at the same time, and we actually had some shared coauthors, measured physician burnout using a validated survey, and they saw a decrease in measured physician burnout. And so there are clear advantages to this, and we're still learning more.

In fact, we've now rolled this out not only to all of our physicians but to all of our nurses who help answer those messages in many different clinics. And one of the things that we're finding, and Dr. CT Lin at the University of Colorado recently published on this, is that this tool might actually help those mid-level providers even more, because it's really good at protocolized responses. As I mentioned at the beginning, some of the questions that come to the physicians may be more the edge cases that require less protocolized kinds of answers. And so as we get into academic subspecialties like gynecologic oncology, the GPT might not be dishing up a draft message that's quite as useful.
But if you're a nurse in obstetrics and you're getting very routine pregnancy questions, it could save a ton of time. And so we've rolled this out broadly.

I want to acknowledge the partnership with Seth Hain and the team at Epic, who've just been fantastic. And we're finding all sorts of new ways to integrate the GPT tools into our electronic health record as well.

LEE: Yeah. Certainly the doctors and nurses that I've encountered who have access to this feature just don't want to give it up. But it's so interesting that it actually doesn't really save time. Is that a problem? Because, of course, you know, there seems to be a workforce shortage in healthcare, a need to lower costs and have greater efficiencies. You know, how do you think about that?

LONGHURST: Great question. There are so many opportunities, as you've kind of mentioned. I mean, healthcare is full of waste and inefficiency, and I am super bullish on how these generative AI tools are going to help us reduce some of that inefficiency.

So everything from revenue cycle to our call centers to operational efficiency, I think, can be positively impacted, and those things make more resources available for clinicians and others. When we think about, you know, saving clinicians time, I don't think it's necessarily, sort of, the communicating with patients where you want to save that time, actually. I think what we want to do is offload some of those administrative tasks that, you know, take a lot of time for our physicians.

So we've measured "pajama time" in our doctors, and on average, a busy primary care clinician can spend one to two hours after clinic doing things. But only about 15 minutes of that is answering messages from patients. Actually, the bulk of the time after hours is documenting the notes that are required from those visits, right. And those notes are used for a number of different purposes: not only communicating to the next doctor who sees the patient but also for billing purposes and compliance purposes and medical-legal purposes. So another really exciting area is AI scribes.

LEE: Yeah. And so, you know, we'll get into scribes and actually other possibilities. I wonder, though, about this empathy issue. Because as computer scientists, we know that you can fall into traps if you anthropomorphize these AI systems or any machine. So in this study, how was that measured, and how real do you think that is?

LONGHURST: So in the study, you'll see anecdotal or qualitative evidence about empathy. We have a follow-up study that will be published soon where we've actually measured empathy using some more quantitative tools, and there is no doubt that the chatbot-generated drafts are coming through with more empathy. And we've heard this from a number of our doctors, so it's not surprising. Here's one of the more surprising things, though. I published a paper last year with Dr. Sally Baxter, one of our ophthalmologists, and she actually looked at messages with a negative tone. It turns out, not surprisingly, healthcare can be frustrating. And stressed patients can send some pretty nasty messages to their care teams. [LAUGHTER] And you can imagine being a busy, ...

LEE: I've done it. [LAUGHS]

LONGHURST: ...tired, exhausted clinician, and receiving a bit of a nastygram from one of your patients can be pretty frustrating. And the GPT is actually really helpful in those instances in helping draft a pretty empathetic response, when I think the human instinct would be a pretty nasty one.
[LAUGHTER] I should probably use it in my email, Peter.

LEE: And is the patient experience, the actual lived experience of patients when they receive these notes... are you absolutely convinced and certain that they are also benefiting from this empathetic tone?

LONGHURST: I am. In fact, in our paper, we also found that the messages going to patients that had been drafted with the AI tool were two to three times longer than the messages going to patients that weren't using the drafts. And so it's clear there's more content going out, and that content is either contributing to a greater sense of empathy and relationship among the patients as well as the clinicians, and/or, in some cases, that content may be educating the patients or even reducing the need for follow-up visits.

LEE: Yeah. So now I think an important thing to share with the audience here is, you know, healthcare, of course, is a very highly regulated industry for good reasons. There are issues of safety and privacy that have to be guarded very, very carefully and thoroughly. And for that reason, clinical studies oftentimes have very carefully developed controls and randomization setups. And so to what extent was that done in this case? Because here, it's not like you're testing a new drug. It's something that's a little fuzzier, isn't it?

LONGHURST: Yeah, that's right, Peter. And credit to the lead author, Dr. Ming Tai-Seale: we actually did randomize. And so that's unusual in these types of studies. We actually got IRB [institutional review board] exemption to do this as a randomized QI study. And it was a crossover study, because all the doctors wanted the functionality. So what we tested was the early adopters versus the late adopters. And we compared, at the same time, the early adopters to those who weren't using the functionality, and then later the late adopters to the folks who weren't using the functionality.

LEE: And in that type of study, you might also, depending on how the randomization is set up, have to have doctors some days using it and some days not having access. Did that also happen?

LONGHURST: We did, but it wasn't on a day-to-day basis. It was more a month-to-month basis.

LEE: Uh-huh. And what kind of conversation do you have with a doctor who might be attached to a technology and then be told that for the next month they don't get to use it?

LONGHURST: [LAUGHS] The good news is, because of a doctor's medical training, they all understood the need for it. And the conversation was sort of, "Hey, we're going to need you to stop using that for a month so that we can compare, but we'll give it back to you afterwards."

LEE: [LAUGHS] OK, great. All right. So now, we made some other predictions. So we talked about, you know, responding to patients. You briefly mentioned clinical note-taking. We also made guesses about other types of paperwork, you know, filling out prior authorization requests or referral letters, maybe for a doctor to refer to a specialist. We even made some guesses about a second set of eyes on medications, on various treatment options, diagnoses. Which of these things have happened and which haven't, at least in your clinical experience?

LONGHURST: Your guesses were spot on. And I would say almost all of them have already happened and are happening today at UC San Diego and many other health systems. We have a HIPAA-compliant GPT instance that can be used for things like generating patient letters, generating referral letters, even generating patient education with patient-friendly language.
And that's a common use case. The second set of eyes on medications is something that we're exploring but have not yet rolled out. One of the areas I'm really excited about is reporting. So Johns Hopkins did a study a couple of years ago showing that an average academic medical center our size spends about $5 million annually just reporting on quality measures that are regulatory requirements. And that's about accurate for us.

We published a paper just last fall showing that large language models could help pre-populate quality data for things like sepsis reporting in a really effective way. It was about 91% accurate. And so that's a huge time-savings and efficiency opportunity. Again, it allows us to redeploy those quality staff. We're now looking at things like how we use large language models to review charts for peer review, to help ensure ongoing, you know, accuracy and mitigate risk. I'm really passionate about the whole space of using AI to improve quality and patient safety in particular.

Your readers may be familiar with the famous 1999 report, To Err Is Human, which suggests a hundred thousand Americans die on an annual basis from medical errors. And unfortunately, the data shows we really haven't made great progress in 25 years, but these new tools give us the opportunity to impact that in a really meaningful way. This is a turning point in healthcare.

LEE: Yeah, medication errors, actually, all manner of medical errors, I think, have been just such a frustrating problem. And, you know, I think this gives us some new hope. Well, let's look ahead a little bit. And just to be a little bit provocative, you know, one question that I get asked a lot by both patients and clinicians is, you know, will AI replace doctors sometime in the future? What are your thoughts?

LONGHURST: So the pat response is that AI won't replace doctors, but AI will replace doctors who don't use AI. And the implication there, of course, is that a doctor using AI will end up being a more effective practitioner than a doctor who doesn't. And I think that's absolutely true. From a medical-legal standpoint, what is standard of care today and what is standard of care five or 10 years from now will be different. And I think there will be a point where, for doctors who aren't using AI regularly, it would almost be unconscionable.

LEE: Yeah, I think there are already some areas where we've seen this happen. My favorite example is with the technology of ultrasound: if you're a gynecologist, or in some parts of internal medicine, there are some diagnostic procedures where it would really be malpractice not to use ultrasound. Whereas in the late 1950s, the safety, and also the doctor training needed to read ultrasound images, were all called into question. And so let's look ahead two years from now, five years from now, 10 years from now. On those three time frames, you know, based on the practice of medicine today, what doctors and nurses are doing in clinic every day, what do you think the biggest differences will be two years from now, five years from now, and 10 years from now?

LONGHURST: Great question, Peter. So first of all, 10 years from now, I think that patients will still be coming to clinic. Doctors will still be seeing them. Hopefully we'll have more house calls and care occurring outside the clinic with remote monitoring and things like that. But the most important part of healthcare is the humanism.
And so what I'm really excited about is AI helping to restore humanism in medical care. Because we've lost some of it over the last 20, 30 years as healthcare has become more corporate.

So in the next two to five years, one of the things I expect to see is AI baked into more workflows. AI scribes are going to become incredibly commonplace. I also think that there are huge opportunities to use those scribes to help reduce errors in diagnosis. So five or seven years from now, I think that when you're speaking to your physician about your symptoms and other things, the scribe is going to be developing a differential diagnosis and helping recommend not only the right follow-up tests or imaging but even the physical exam findings that the doctor might want to look for in particular to help make a diagnosis.

A dirty secret in healthcare, Peter, is that 50% of doctors are below average. It's just math. And I think that AI can help raise up all of our doctors. So it's like Lake Wobegon: they're all above average. It has important implications for the workforce, as you were saying. Do we need all visits to be with primary care doctors? Will mid-level providers augmented by AI be able to do as great a job as many of our physicians do? I think these are unanswered questions today that need to be explored. And then there was a really stimulating editorial in The New York Times recently by Dr. Eric Topol, in which he was waxing philosophic about a recent study that showed AI could interpret X-rays with 90% accuracy, while radiologists actually achieved about 72% accuracy.

LEE: Right.

LONGHURST: The study looked at how the radiologists did with the AI working alongside them. And together they got about 74% accuracy. So the doctors didn't believe the AI. They thought that they were in the right. And the inference that Eric took, which I agree with, is that rather than always looking for ways to combine the two, we should be thinking about those tasks that are amenable to automation and could be offloaded to AI, so that our physicians are focused on the things that they're great at, which is not only the humanism in healthcare but a lot of those edge cases we talked about. So let's take mammogram screening, or chest X-ray screening, as an example. There's going to be a point in the next five years where all first reads are being done by AI, and then it's only the subset of those that are positive that need to be reviewed by physicians. And that helps free up radiologists to do a lot of other things that we need them to do.

LEE: Wow, that is really just such a great vision for the future. And I call some of this "the flip," where even patient expectations on the use of technology flip from fear and uncertainty to, "You would try to do this without the technology?" And I think you just really put a lot of color and detail on that. Well, Chris, thank you so much for this. On that groundbreaking paper from April of 2023, we'll put a link to it. It's a really great thing to read. And of course, you've published extensively since then. But I can't thank you enough for just all the great work that you're doing. It's really changing medicine.

[TRANSITION MUSIC]

LONGHURST: Peter, I can't thank you enough for the opportunity to be here today and the partnership with Microsoft to make this all possible.

LEE: I always love talking to Chris because he really is a prime example of an important breed of doctor: a doctor who has clinical experience but is also a world-class tech geek.
[LAUGHS] You know, it's surprising to me, and pleasantly so, that the traditional gold standard of randomized trials that Chris has employed can be used to assess the viability of generative AI, not just for things like medical diagnosis but even for seemingly mundane things like writing email notes to patients.

The other surprise is that the use of AI, at least in the in-basket task, which involves doctors having to respond to emails from patients, doesn't seem to save much time for doctors, even though the AI is drafting those notes. Doctors seem to love the reduced cognitive burden, and patients seem to appreciate the greater detail and friendliness that AI provides, but it's not yet a big timesaver. And of course, the biggest surprise out of the conversation with Chris was his celebrated paper, back two years ago now, on the idea that AI notes are perceived by patients as being more empathetic than notes written by human doctors. Wow.

Let's go ahead to my conversation with Dr. Sara Murray:

LEE: Sara, I'm thrilled you're here. Welcome.

SARA MURRAY: Thank you so much for having me.

LEE: You know, you actually have a lot of roles, and I know that's not so uncommon for people at the leading academic medical institutions. But, you know, I think for our audience, understanding what a chief health AI officer does, an associate professor of clinical medicine: what does it all mean? And so to start, when you talk to someone, say, like your parents, how do you describe your job? You know, how do you spend a typical day at work?

MURRAY: So first and foremost, I do always introduce myself as a physician, because that's how I identify, that's how I trained. But in my current role, as the chief health AI officer, I'm really responsible for the vision and strategy for how we use trustworthy AI at scale to solve the biggest problems in our health system. And so I think there are a couple of key important points about that. One is that we have to be very careful that everything we're doing in healthcare is trustworthy, meaning it's safe, it's ethical, it's doing what we hope it's doing, and it's not causing any unexpected harm.

And then, you know, second, we really want to be doing things that affect, you know, the population at large of the patients we're taking care of. And so I think if you look historically at what's happened with AI in healthcare, you've seen little studies here and there, but nothing broadly affecting or transforming how we deliver care. And I think now that we're in this generative AI era, we have the tools to start thinking about how we're doing that. And so that's part of my role.

LEE: And I'm assuming a chief health AI officer is not a role that has been around for a long time. Is this fairly new at UCSF, or has this particular job title been around?

MURRAY: No, it's a relatively new role, actually. I came into this role about 18 months ago. I am the first chief health AI officer at UCSF, and I actually wrote the paper defining the role with...

LEE: It's so interesting, because I would say in the old days, you know, like five years ago, [LAUGHS] information technology in a hospital or health-system setting might be under the control and responsibility of a chief information officer, a CIO, or an IT, you know, chief. Or if it's maybe some sort of medical device technology integration, maybe it's some engineering type of leader, a chief technology officer.
But you're different, and in fact the role that I think I would credit you with, sort of, making the blueprint for seems different, because it's actually doctors, practicing clinicians, who tend to inhabit these roles. Is there a reason why it's different that way? Like, a typical CIO is not a clinician.

MURRAY: Yeah, so I report to our CIO. And I think that there's a recognition that you need a clinician who really understands in practice how the tools can be deployed effectively. So it's not enough to just understand the technology; you really have to understand the use cases. And I think when you're seeing physician chief health AI officers pop up around the country, it's because they're people who both understand the technology (not to the level you do, obviously, but to some sufficient level) and understand how to use these tools in clinical care, where they can drive value, what the risks are in clinical care, and that type of thing. And so I think it'd be hard for it not to be some type of clinician in this role.

LEE: So I'm going to want to get into, you know, what's really happening in clinic, but before that, I've been asking our guests about their stages of AI grief, [LAUGHS] as I like to put it. And for most people, I've been talking about the experiences and encounters with machine learning and AI before ChatGPT and then afterwards. And so can you tell us a little bit about, you know, how you got into AI in the first place, and what your first encounters were like?

MURRAY: Yeah. So I actually started out as a health services researcher, and this was before we had electronic health records [EHR], when we were still writing our notes on carbon copy in the elevators, and a lot of the data we used was actually claims data. And that was the kind of rich data source at the time, but as you know, that was very limited.

And so when we went live with our electronic health record, I realized there was this tremendous opportunity to really use rich clinical data for research. And so I initially started collaborating with folks down at Stanford to do machine learning to identify, you know, rare diseases like lupus in the electronic health record, but I quickly realized there was this real gap in the health system for using data in an actionable way.

And so I built what was initially our advanced analytics team, which grew into our data science team and is now our health AI team, as our ability to use the data in more sophisticated ways evolved. But if we think about, I guess, the pre-generative era and my first encounter with AI, or at least AI deployment in healthcare, you know, it was probably eight or nine years ago when we got access through our EHR vendor to some initial predictive tools. These were relatively simple tools, but they were predicting things we care about in healthcare, like who's not going to make it to a clinic visit or how long patients are going to stay in the hospital.

And so there was a lot of interest in, you know, predicting who might not make it to a clinic visit, because we have big access issues, with it being difficult for patients to get appointments, and the idea was that if you knew who wouldn't show, you could actually put someone else in that slot. It's called overbooking.
And so when we looked at the initial model, it was striking to me how risky it was for vulnerable patient populations because immediately it was obvious that this model was likely to overbook people by race, by body weight, by things that were clearly protected patient characteristics.

And so we did a lot of work initially with that model and a lot of education around how these tools could be biased. But the risk existed, and as we continued to look at more of these models, we found there were a lot of issues with trustworthiness. You know, there was a length-of-stay prediction model that my team was able to outperform with a pair of dice. And when I talked to other systems about not implementing this model, you know, folks said, "But it must be useful a little bit." I was like, actually, you know, if the dice are better, it's not useful at all. [LAUGHS]

LEE: Right!

MURRAY: And so there was very little out there to frame this, but we quickly realized we had to start putting something together because there's a lot of hype and there's a lot of hope, but there's also a lot of risk here. And so that was my pre-generative moment.

LEE: You know, just before I get to your post-generative moment: this story that you told, I sometimes refer to it as the healthcare IT world's version of irrational exuberance. Because I think one thing that I've learned, and I have to say I've been guilty personally as a techie, is that you look at some of the problems that the world of healthcare faces, and to a techie first encountering this, a lot of it looks like common sense. Of course we can build a model and predict these things.

And you sort of don't understand some of the realities, as you've described, that make this complicated. And at the same time, from healthcare professionals, I sometimes think they look at all of this dazzling machine learning magic and also are kind of overly optimistic that it can solve so many problems.

And it does create this danger, this irrational exuberance, where both sides get into a reinforcing cycle and are too quick to adopt technologies without thinking through the implications more carefully. I don't know if that resonates with you at all.

MURRAY: Yeah, totally. I think there's a real educational opportunity here because it's the "you don't know what you don't know" phenomenon. And so I do think there is a lot of work to be done in healthcare around, you know, people understanding the strengths and limitations of these tools because they're not magic, but they are perceived to be magic.

And likewise, you know, I think the tech world often doesn't understand, you know, how healthcare is practiced and doesn't think through these risks in the same way we do, right. So I know that some of the vulnerable patients who might've been overbooked by that algorithm are the people who I most need to see in clinic and are the people who would be, you know, most slighted if they show up and the other patient shows up and now you have an overworked clinician. But I just think those are stages, you know, further down the pathway of utilization of these algorithms that people don't think of when they're initially developing them.

And so one of the things we actually, you know, require in our AI oversight process is that when folks come to the table with a tool, they have to have a plan for how it's going to be used and operationalized.
And a lot of things die right there, honestly, because folks have built a cool tool, but they don't know who's going to use it in clinic, who the clinical champions are, or how it'll be acted on, and you can't really evaluate whether these tools are trustworthy unless you've thought through all of that.

Because you can imagine using the same algorithm in dramatically different ways, right. If you're using the no-show model to do targeted outreach and send people a free Lyft if they have transportation issues, that's going to have very different outcomes than overbooking folks.

LEE: It's so interesting, and I'm going to want to get back to this topic because I think it also speaks to the challenges of how you integrate technologies into the daily workflow of a clinic. And I know this is something you think about a lot, but let's get back now to my original question about your AI moments. So now November 2022, ChatGPT happens, and what is your encounter with this new technology?

MURRAY: Yeah. So I used to be on MedTwitter, or I still am actually; it's just not as active anymore. But I would say, you know, MedTwitter went crazy after ChatGPT was initially released, and it was largely filled with catchy poems and people, you know, having fun...

LEE: [LAUGHS] Guilty.

MURRAY: Yeah, exactly. I still use poems. And people having fun trying to make it hallucinate. And so, you know, I went (I was guilty of that, as well) and one of the things I initially did was I asked it to do something crazy. So I asked it, draft me a letter for a prior authorization request for a drug called apixaban, which is a blood thinner, to treat insomnia. And if you practice clinical medicine, you know that we would never use a blood thinner to treat insomnia. But it wrote me such a compelling letter that I actually went back to PubMed and made sure that I wasn't missing anything, like some unexpected side effect. I wasn't missing anything, and in fact it was a hallucination. And so at that moment I said, this is very promising technology, but this is still a party trick.

LEE: Yeah.

MURRAY: A few months later, I went and did the exact same prompt, and I got a lecture, instead of a draft, about how it would be unethical [LAUGHTER] and unsafe for me to draft such a request. And so, you know, I realized these tools were rapidly evolving, and the game was just going to be changing very quickly. I think the other thing that, you know, we've never seen before is the deployment of a technology at scale like we have with AI scribes.

So this is a technology that was in its infancy, you know, two years ago and is now largely a commodity deployed at scale across many health systems, in a very short period of time. There have been no government incentives for people to do this. And so it clearly works well enough to be used in clinics. And I think these tools, you know, like AI scribes, have the opportunity to really undo a lot of the harm that the electronic health record implementations were perceived to have caused.

LEE: What is a scribe, first off?

MURRAY: Yeah, so AI scribes or, as we're now calling them, AI assistants or ambient assistants, are tools that essentially listen to your clinical interaction. We record it with the permission of the patient, with consent, and then they draft a clinical note, and they can also draft other things, like the patient instructions.
And the idea is those drafts are very helpful to clinicians, who have to review and edit them, but it saves a lot of the furious typing that was previously happening during patient encounters.

LEE: We have also been talking to Chris Longhurst, your colleague at UC San Diego, and, you know, he mentions also the importance of having appropriate billing codes in those notes, which is yet another burden. Of course, when Carey, Zak, and I wrote our book, we predicted that AI scribes would get better and would find wider use because of the improvement in technology. Let me start by asking, do you yourself use an AI scribe?

MURRAY: So I do not use it yet because I'm an inpatient doctor, and we have deployed them to all ambulatory clinic doctors because that's where the technology is tried and true. So we're looking now to deploy it in the inpatient setting, but we're doing very initial testing.

LEE: And what are the reasons for not integrating it into the inpatient setting?

MURRAY: Well, there are two things, actually. Most inpatient documentation work, I would say, is follow-up documentation. And so you're often taking your prior notes and making small changes as you change the care from day to day. And so the tools (all of the companies are working on this) just don't really incorporate your prior documentation or note when they draft your note for today.

The second reason is that a lot of the decision-making that we do in the inpatient setting is asynchronous with the patient. So we'll often have a conversation in the morning with the patient in their room, and then I'll see some labs come back, and I'll make decisions and act on those labs and give the patient a call later to let them know what's going on. And so it's not one succinct encounter, and so the technology is going to have to be a little bit different to work in that case, I think.

LEE: Right, and so these are distinct workflows from the ambulatory setting, where it is the classic: you're sitting with a patient in an exam room having an encounter.

MURRAY: Mm-hmm. Exactly. And all your decisions are made there. And I would say it's also different from nursing. We're also looking at deploying these tools to nurses. But a lot of their documentation is in something called flowsheets. They write specific numbers in columns, and so for them to use these tools, they'd have to start saying to the patient, "Sounds like your pain is a five. Your blood pressure is 120 over 60." And so those are different workflows they'd have to adopt to use the tools.

LEE: So you've been in the position of having to oversee the integration of AI scribes into UCSF Health. From your perspective, how are clinical staff actually viewing all of this?

MURRAY: So I would say clinical staff are largely very excited, receptive, and would like us to move faster. And in fact, I gave a town hall at UCSF, and all of the comments were: When is this coming for APPs [advanced practice providers]? When is this coming for allied health professionals? And so people want this across healthcare. It's not just doctors. But at the same time, you know, I think there's a technology adoption curve, and about half of our ambulatory clinicians have signed up, and about a third of them are now using the tool. And so we are now doing outreach to figure out who is not using it, why they aren't using it, and what we can do to increase adoption. Or are there true barriers that we need to help folks overcome?

LEE: And when you do these things, of course, there are risks.
And as you were mentioning several times before, you were really concerned about hallucinations, about trustworthiness. So what were the steps that you took at UCSF to make these integrations happen?

MURRAY: Yeah, so we have an AI oversight process for all AI tools that come into our health system, regardless of where they're coming from. So industry tools, internally developed tools, and research tools come through the same process. And we have a committee that is quite multidisciplinary. We have health system leaders, data scientists, bioethicists, researchers, health-equity experts. And through our process, we break down the AI lifecycle into a couple of key places where these tools come for committee review. And so for every AI deployment, we expect people to establish performance metrics and fairness metrics, and we help them with figuring out what those things should be.

We were also fortunate to receive a donation to build an AI monitoring platform, which we're working on now at UCSF. We call it our Impact Monitoring Platform for AI and Clinical Care, IMPACC, and AI scribes is actually our first use case. And so on that platform, we have a metric adjudication process where we've established, you know, what do we really care about for our health system executive leaders, what do we really care about for, you know, ensuring safety and trustworthiness, and then, you know, what are our patients going to want to know? Because we want to also be transparent with our patients about the use of these tools. And so we have processes for doing all this work.

I think the challenge is actually how we scale these processes as more and more tools come through because, as you can imagine, it takes a lot of conversation with a lot of stakeholders to figure out what and how we measure things right now.

LEE: And so there's so much to get into there, but I actually want to zoom in on the actual experience that doctors, nurses, and patients are having. And, you know, do you find that AI is meeting expectations? Is it making a difference, positive or negative, in people's lives? And what kinds of potential surprises are people encountering?

MURRAY: Mm-hmm. So we're collecting data in a couple of ways. First, we're surveying clinicians before and after their experience, and we are hearing from folks that they feel like their clinic work is more manageable and that they're more able to finish their documentation in a timely fashion.

And then we're looking at actual metrics that we can extract from the EHR around how long people are spending doing things. And that data is largely aligning with what people are reporting, although the caveat is they're not saving enough time for us to have them see more patients. And so we've been very explicit at UCSF about making it clear that this is a tool to improve experience and not to improve efficiency.

So we're not expecting people to see more patients as a result of using this tool. We want their clinic experience to be more meaningful. But then the other thing that's interesting that folks share is this tremendous relief of cognitive burden that they feel when using this tool. So they may have been really efficient before. You know, they could get all their work done. They could type while they were talking to their patients. But they didn't actually, you know, get to look at their patients eye to eye and have the meaningful conversations that people went into medicine for.
And so we're hearing that, as well.

And I think one of the things that's going to be important to us is actually measuring that moving forward. And that is matched by some of the feedback we're getting from patients. So we have quotes from patients where they've said, you know, "My doctor is using this new tool, and it's amazing. We're just having eye-to-eye conversations. Keep using it." So I think that's really important.

LEE: I've been pushing my own primary care doctor to get into this because I really depend on her. I love her dearly, but we never... I'm always looking at her back as she's typing at a computer during our encounters. [LAUGHS]

So, Sara, while we're talking about efficiency: at least the early evidence doesn't show clear efficiency gains, and that raises the question of how or why health systems, many of which are, you know, financially not swimming in money, could adopt these things.

And then we could also even imagine that there are even more important applications in the future that, you know, might require quite a bit of expense from developers as well as procurers of these things. You know, what's your point of view on what I guess we would call the ROI question about AI?

MURRAY: Mm-hmm. I think this is a really challenging area because return on investment is very important to health systems that are trying to figure out how to spend a limited budget to improve care delivery. And so I think we've started to see a lot of small use cases that prove this technology could likely be beneficial.

So there are use cases that you may have heard of from Dr. Longhurst around drafting responses to patient messages, for example, where we've seen that this technology is helpful but doesn't get us all the way there. And that's because these technologies are actually quite expensive. And when you want to process large amounts of data, that data is measured in tokens, and tokens cost money.

And so I think one of the challenges when we envision the future of healthcare is that we're not really envisioning the expense of querying the entire medical record through a large language model. And we're going to have to build systems, from a technology standpoint, that can do that work in a more affordable way for us to be able to deliver really high-value use cases to clinicians that involve processing all of that data.

And so those are use cases like summarizing large parts of the patient's medical record or providing really meaningful clinical decision support that takes into account the patient's entire medical history. We haven't seen those types of use cases really come into being yet, largely because, you know, they're technically a bit more complex to do well and they're expensive, but they're completely feasible.

LEE: Yeah. You know, what you're saying really resonates so strongly from the tech industry's perspective. You know, one way that that problem manifests itself is that shareholders in big tech companies like ours more or less expect, because they're paying a high premium, a high multiple on the share price, that our revenues will grow at very spectacular, double-digit rates. But that isn't obviously compatible with how healthcare works and how the healthcare business works. It doesn't grow, you know, at 30% year over year or anything like that.

And so how do we make these things financially make sense for all comers? And it's sort of part and parcel with the problem that sometimes efficiency gains in healthcare just translate into heavier caseloads for doctors, which isn't obviously the best outcome either.
And so in a way, I think it's another aspect of the work on impact and trustworthiness when we think about technology, at all, in healthcare.

MURRAY: Mm-hmm. I think that's right. And I think, you know, if you look at the difference between the AI scribe market and the rest of the summarization work that's largely happening within the electronic health record: in the AI scribe market, you have a lot of independent companies, and they are all competing to be the best. And so because of that, we're seeing the technology get more efficient and cheaper. There's just a lot of investment in that space.

Whereas with the electronic health record providers, they're also invested in providing us with these tools, but it's not their main priority. They're delivering an entire electronic health record, and they also have to do it in a way that is affordable for, you know, all kinds of health systems: big UCSF-sized health systems and smaller settings. And so there's a real tension, I think, between delivering good-enough tools and truly transformative tools.

LEE: So I want to go back for a minute to this idea of cognitive burden that you described. When we talk about cognitive burden, it's often in the context of paperwork, right. There are maybe referral letters, after-visit notes, all of these things. How do you see these AI tools progressing with respect to that stream of different administrative tasks?

MURRAY: These tools are going to continue to be optimized to do more and more tasks for us. So with AI scribes, for example, you know, we're starting to look at whether they can draft the billing and coding information for the clinician, which is a tedious task with many clicks.

These tools are poised to start pending orders based on the conversation. Again, a tedious task. All of this with clinician oversight. But I think as we move from AI scribes to AI assistants, it's going to be like a helper on the side for clinicians, doing more and more work so they can really focus on the conversations, the shared decision-making, and the reason they went into medicine, really.

LEE: Yeah, let me... since you mentioned AI assistants, and that's such an interesting word, it does connect with something that was apparent to us even, you know, as we were writing the book, which is this phenomenon that these AI systems might make mistakes.

They might be guilty of making biased decisions or showing bias, and yet at the same time they seem incredibly effective at spotting other people's mistakes or other people's biased decisions. And so is there a point where these AI scribes do become AI assistants, where they're sort of looking over a doctor's shoulder and saying, "Hey, did you think about something else?" or "Hey, you know, maybe you're wrong about a certain diagnosis"?

MURRAY: Mm-hmm. I mean, absolutely. You're really just talking about combining technologies that already exist into a more streamlined clinical care experience, right. So you can... and I already do this when I'm on rounds: I'll kind of give the case to ChatGPT if it's a complex case, and I'll say, "Here's how I'm thinking about it; are there other things?" And it'll give me additional ideas that are sometimes useful and sometimes not, but often useful, and I'll integrate them into my conversation about the patient.

I think all of these companies are thinking about that. You know, how do we integrate more clinical decision-making into the process? I think it's just... you know, healthcare is always a little bit behind the technology industry in general, to say the least.
And so it's kind of one step at a time, and all of these use cases need a lot of validation. There are regulatory issues, and so I think it's going to take time for us to get there.

LEE: Should I be impressed or concerned that the chief health AI officer at UC San Francisco Health is using ChatGPT off label?

MURRAY: [LAUGHS] Well, actually, every time I go on service, I encourage my residents to use it because I think we need to learn how to use these technologies. And, you know, when our medical education leaders start thinking about how we teach students to use these, we don't know how to teach students to use them if we're not using them ourselves, right. And so I've learned a lot about what I perceive the strengths and limitations of the tools to be.

And, you know, one of the things that we've learned (and you've written about this in your book) is that the prompting really matters. And so I had a resident ask it for a differential for abnormal liver tests. But in asking for that differential, there was a key important blood finding, something called eosinophilia. It's a type of blood cell that was mildly, mildly elevated, and they didn't know it. So they didn't give it in the prompt, and as a result, they didn't get the right differential, but it wasn't actually ChatGPT's fault. It just didn't get the right information because the trainee didn't recognize the right information. And so I think there's a lot to learn as we practice using these tools clinically. So I'm not ashamed of it. [LAUGHS]

LEE: [LAUGHS] Yeah. Well, in fact, I think my coauthor Carey Goldberg would find what you said really validating because in our book, she actually wrote this fictional account of what it might be like in the future. And this medical resident was also using an AI chatbot off label for pretty much the same kinds of purposes. And it's these kinds of things that, you know, it seems like might be coming next.

MURRAY: I mean, medicine, the practice of medicine, is a very imperfect science, and so, you know, when we have a difficult case, I might sit in the workroom with my colleagues and run it by people. And everyone has different thoughts and opinions on, you know, things I should check for. And so I think this is just one other resource where you can kind of run cases, obviously reviewing all of the outputs yourself.

LEE: All right, so we're running short on time, and so I want to be a little provocative at the end here. And since we've gotten into AI assistants, two questions. First off, do we get to a point in the near future when it would be unthinkable, and maybe even bordering on malpractice, for a doctor not to use AI assistants in his or her daily work?

MURRAY: So it's possible that we see that in the future. We don't see it right now. And that's part of the reason we don't force this on people. So we see AI scribes or AI assistants as a tool we offer to people to improve their daily work because we don't have sufficient data that the outcomes are markedly better from using these tools.

I think there is a future where specific, you know, tools do actually improve outcomes. And then their use should be incentivized, either through, you know, CMS [Centers for Medicare & Medicaid Services] or other systems, to ensure that, you know, we're delivering standard of care.
But we're not yet at the place where any of these tools are standard of care, which would mean they should be used to practice good medicine.

LEE: And I think I would say that it's the work of people like you that would make it possible for these things to become standard of care. And so now, final provocation. It must have crossed your mind through all of this, the possibility that AI might replace doctors in some ways. What are your thoughts?

MURRAY: I think we're a long way from that happening, honestly. And I think even when I talk to my colleagues in radiology about this, where I perceive, as an internist, they might be the most replaceable, there are a million reasons why that's not the case. And so I think these tools are going to augment our work. They're going to help us streamline access for patients. They're going to maybe change what clinicians have to do, but I don't think they're going to fully replace doctors. There's just too much complexity and nuance in providing clinical care for these tools to do that work fully.

LEE: Yeah, I think you're right. And actually, you know, I think there's plenty of evidence because in the history of modern medicine, we actually haven't seen technology replace human doctors. Maybe you could say that we don't use barbers for bloodletting anymore because of technology. But I think, as you say, we're at least a long way away.

MURRAY: Yeah.

LEE: Sara, this has been just a great conversation. And thank you for the great work that you're doing, you know, and for being so open with us about your personal use of AI, but also about how you see the adoption of AI in our health system.

[TRANSITION MUSIC]

MURRAY: Thank you, it was really great talking with you.

LEE: I get so much out of talking to Sara. Every time, she manages to get me refocused on two things: the quality of the user experience and the importance of trust in any new technology that is brought into the clinic. I felt like there were several good takeaways from the conversation. One is that she really validated some predictions that Carey, Zak, and I made in our book, first and foremost that automated note taking would be a highly desirable and practical reality. The other validation is Sara revealing that even she uses ChatGPT as a daily assistant in her clinical work, something that we guessed would happen in the book, but we weren't really sure about, since health systems are oftentimes very locked down when it comes to the use of technological tools.

And of course, maybe the biggest thing about Sara's work is her role in defining a new type of job in healthcare, the health AI officer. This is something that Carey, Zak, and I didn't see coming at all but, in retrospect, makes all the sense in the world. Taken together, these two conversations really showed that we were on the right track in the book. AI has made its way into day-to-day life and work in the clinic, and both doctors and patients seem to be appreciating it.

[MUSIC TRANSITIONS TO THEME]

I'd like to extend another big thank you to Chris and Sara for joining me on the show and sharing their insights. And to our listeners, thank you for coming along for the ride. We have some really great conversations planned for the coming episodes. We'll delve into how patients are using generative AI for their own healthcare, the hype and reality of AI drug discovery, and more. We hope you'll continue to tune in. Until next time.

[MUSIC FADES]
-
Seed Oils or Animal Fats: What Is Healthiest to Cook With?
www.discovermagazine.com

When it comes time to whip up your favorite meal, one of the first items you'll grab is probably some form of vegetable oil or animal fat, like a jug of canola oil or a tub of butter. But have you ever stopped to think about which option is the healthiest way to kick-start a recipe? The choices may seem overwhelming, and now many consumers are embroiled in a hot debate over growing suspicion of seed oils.

Critics of seed oils have claimed that the ingredients are toxic to the human body, influencing a slew of maladies from heart disease to weight gain. These arguments condemn the processing that the oils go through and tend to prop up animal fats like lard and beef tallow as better alternatives.

What to believe, then? See the full picture of seed oils and how they compare to animal fats in terms of human health.

How Are Seed Oils Processed?

Seed oils come in a variety of forms, but those branded as part of what's called the "hateful eight" have come under the most scrutiny. This group includes canola, corn, cottonseed, grapeseed, rice bran, soybean, safflower, and sunflower oils.

Skepticism of the hateful eight stems from the production process of many seed oils, which uses mechanical or chemical treatment. Edible oil processing commonly relies on hexane, a solvent used to extract oil from seeds after they're crushed. Although high concentrations of hexane in its gaseous form can trigger mild nervous system effects (such as headaches and dizziness) through inhalation, the liquid solvent used to extract oils is evaporated off until none is left in the seed oil, or only extremely small trace amounts that don't have a toxic effect.

Still, there have been calls to switch to alternative solutions like green solvents (such as water or CO2) or bio-based solvents (derived from crops) that would allay any safety concerns, on top of being more environmentally friendly.

Another process used by some facilities is cold-pressing, in which oil is pressed at a temperature below 49 C (120 F), often without the need for chemical solvents like hexane. This method has also been purported to retain more nutrients and bioactive compounds compared to refined oils.

Read More: Is the Mediterranean Diet Healthy?

The Effects of Omega-6 Fatty Acids

Another claim that seed oil critics bring up relates to their omega-6 fatty acid content. Omega-6 fatty acids are an essential component of diets that lowers bad cholesterol (LDL) and raises good cholesterol (HDL), boosting heart health.

Omega-6 fats, which take the form of linoleic acid in seed oils, have been blamed for inciting inflammation. This argument, however, does not tell the full story. It seems more likely that a skewed balance of omega-6 fats and omega-3 fats could be the reason people experience health problems; omega-3 fats, found mostly in fatty fish like salmon and mackerel, are even more essential than their omega-6 counterparts, but most people aren't consuming nearly enough of them. One 2023 review published in Nutrients states that the standard American diet contains 14 to 25 times more omega-6 fatty acids than omega-3 fatty acids.

It may be easy to pin the blame on omega-6 fats, but studies have shown that they have cardiovascular benefits.
It appears the problem isn't that omega-6 fats are pro-inflammatory but rather that omega-3 fats are just noticeably more anti-inflammatory, and humans aren't getting enough of them in their diets.

Seed oils are also unfortunately paired with foods that are already ultra-processed, in which refined carbohydrates, sodium, and sugar are more to blame for health problems like weight gain.

Animal Fats vs. Vegetable Oils

Animal products that are high in saturated fats, like butter, lard, and tallow, have been shown to cause adverse health effects. While they do make food delicious, consuming too much of them poses the risk of heart disease and contributes to obesity. In small amounts, though, they can be safe to add to food.

A 2021 study published in BMC Medicine found that consumption of butter and margarine was associated with higher total mortality, while canola oil and olive oil were linked with lower total mortality. Scientists came to a similar conclusion in a recent study published in March 2025 in JAMA Internal Medicine, suggesting that substituting butter with plant-based oils may help prevent premature death.

For those looking for a healthy option, olive oil stands out as a valuable choice. It is not a seed oil; instead, it comes from the fleshy part of the ripened olive fruit, which is pressed without the need for solvents. In addition to lowering LDL cholesterol and improving heart health, it contains key vitamins (like vitamins E and K) and minerals.

Article Sources

Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review them for scientific accuracy and editorial standards. Review the sources used below for this article:

American Heart Association. There's no reason to avoid seed oils and plenty of reasons to eat them
Penn State. Processing Edible Oils
The Nutrition Source. Ask the Expert: Concerns about canola oil
Harvard Health Publishing. No need to avoid healthy omega-6 fats
Stanford Medicine. 5 things to know about the effects of seed oils on health
PubMed. Cooking oil/fat consumption and deaths from cardiometabolic diseases and other causes: prospective analysis of 521,120 individuals

Jack Knudson is an assistant editor at Discover with a strong interest in environmental science and history. Before joining Discover in 2023, he studied journalism at the Scripps College of Communication at Ohio University and previously interned at Recycling Today magazine.
-
Why Can't We Remember Our Memories as a Baby, If We Make Them?
www.discovermagazine.com

There's a lot to remember from your time as a baby: your first smile, your first steps, your first words. But chances are, you've forgotten all of it, a phenomenon called infantile amnesia.

For a long time, infantile amnesia was thought to be tied to an inability to make memories in infancy. But a new study supports the idea that babies do, indeed, encode memories in the first years of their lives, by linking measures of brain activity to measures of memory recall in infants for the first time.

A New Notion of Memory

Though our days as infants are filled with new experiences, we don't remember those experiences later on in life. Researchers long thought that the hippocampus, the region of the brain that's in charge of making memories, wasn't developed enough during infancy to encode specific events as memories. But the results of the new study indicate that isn't true.

Published in Science, the study involved 26 infants, all aged 4 months to 2 years old. At the start of the study, the infants' brains were monitored as they were shown a series of images of faces, objects, or scenes. At the end, the infants were shown a previously seen image along with a new image to test their recall.

Using functional magnetic resonance imaging (fMRI) to measure the babies' brain activity as they looked at the images for the first time, the researchers revealed a link between hippocampal activation and memory recall, represented by the length of time that the infants looked at a previously seen image. If an infant's hippocampal activity was higher when they saw an image for the first time, the study revealed, then they tended to look at the image longer when they saw it for a second time, indicating that heightened activity in the hippocampus had resulted in heightened recall.

Revealed through an approach that mitigated the infants' movements (which have previously posed a problem for fMRI readings), this pattern of hippocampal activity and recall rang true for all 26 infants. That said, the patterns were most apparent among infants 12 months old or older, providing researchers with a clearer picture of hippocampal ability throughout infancy.

Read More: What Happens in Your Brain When You Make Memories?

Making Memories and Testing Them

According to the study authors, measuring memory recall in infants is a tricky task. "The hallmark of these types of memories, which we call episodic memories, is that you can describe them to others, but that's off the table when you're dealing with pre-verbal infants," said Nick Turk-Browne, a study author and a psychology professor at Yale, according to a press release. So, to measure memory recall, the study authors turned to a pre-verbal metric: the amount of time that the infants stared at a previously seen image.

"When babies have seen something just once before, we expect them to look at it more when they see it again," Turk-Browne said in the release. "In this task, if an infant stares at the previously seen image more than the new one next to it, that can be interpreted as the baby recognizing it as familiar."

Missing Memories?

Ultimately, the new research reveals that we make memories earlier than long thought. But if we do, indeed, make memories earlier, why do these memories disappear before we reach adulthood? The experiences of infancy may be saved as short-term memories but not as long-term memories, and thus slip away as infants age into adults.
It is also possible that these memories are stored somewhere out of reach in the brain, remaining there throughout adulthood. If the latter does turn out to be true, it would indicate that infantile amnesia is not an issue of memory making but of memory retrieval, Turk-Browne said in the release. "We're working to track the durability of hippocampal memories across childhood, and even beginning to entertain the radical, almost sci-fi possibility that they may endure in some form into adulthood, despite being inaccessible."

Article Sources

Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review them for scientific accuracy and editorial standards.

Sam Walters is a journalist covering archaeology, paleontology, ecology, and evolution for Discover, along with an assortment of other topics. Before joining the Discover team as an assistant editor in 2022, Sam studied journalism at Northwestern University in Evanston, Illinois.