• How AI is reshaping the future of healthcare and medical research

    Transcript       
PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biological knowledge and encoded it in an accessible way. I didn’t expect them to do that very quickly, but it would be profound.
    And it was only about six months after I challenged them to do that that they brought an early version of GPT-4 to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, …
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, having read all the literature of the world about good doctors and bad doctors, the models will understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see it produced what you wanted. So I absolutely agree with that.
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect would happen so quickly, and it’s due to those reasoning models.
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to look like at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future compared with today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, the quality of care, the reduced overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing knowledge and the ability to pass multiple-choice exams and more about assessing how well AI models can complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important work that speaks to how well AI models perform in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. My own answer: I expect that we’re going to be able to use AI to change how we diagnose patients and how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.      Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  
LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. 
But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. 
But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. 
And just right there, it was shown to be possible.   LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   
Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  
LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   
The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. 
And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. 
It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? 
What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. 
And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   
The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. 
I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? 
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. 
He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? 
What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.
    Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  
BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. 
It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  
I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. 
Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  
You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. 
[LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT (opens in new tab). And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  
It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  
LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? 
So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. 
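The trap Bubeck describes, over-optimizing against a learned reward model until sycophancy emerges, is an instance of Goodhart's law, and a toy calculation shows its shape. The functions below are invented purely for illustration and are not how GPT-4o was actually trained.

```python
# Toy illustration of reward-model over-optimization (Goodhart's law), not a
# real training setup: the proxy reward keeps rising with "agreeableness" x,
# while true response quality peaks at a moderate level and then collapses
# into sycophancy.

import numpy as np

def proxy_reward(x):
    # A learned reward model that (wrongly) always prefers more agreement.
    return x

def true_quality(x):
    # Actual usefulness: helpful up to a point, then sycophantic.
    return x - 0.5 * x**2

pressure = np.linspace(0.0, 2.0, 21)   # increasing optimization pressure
best_true = pressure[np.argmax(true_quality(pressure))]

print(f"proxy reward keeps rising all the way to x = {pressure[-1]:.1f}")
print(f"true quality peaks at x = {best_true:.1f} and declines afterward")
```

Optimizing harder on the proxy always looks like progress to the reward model, which is why the failure was only discovered in deployment.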
So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. 
So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  
GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. 
By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   
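Lee's point about machine-checkable proofs can be made concrete with a tiny example in Lean, the proof language he mentions: the kernel verifies every step mechanically, whether or not a human follows the argument. This is only an illustrative fragment, not from any of the results discussed.

```lean
-- Lean 4: a trivially machine-checkable proof. The kernel verifies the step
-- below mechanically; a proof millions of lines long would be checked the
-- same way, even if no human could hold the whole argument in their head.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```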
Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  
And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   
I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   
HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
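The "patients like me" paradigm Lee describes is, at its core, similarity search over prior patient records. Below is a minimal sketch with entirely invented features and records; a real system would use rich clinical data, privacy controls, and far more careful similarity measures.

```python
# Minimal sketch of "patients like me": retrieve the prior patients most
# similar to a new patient and summarize their diagnoses and outcomes.
# Features here are (age, systolic BP, HbA1c); all records are invented.

import math

records = [
    ((62, 148, 7.9), "type 2 diabetes", "controlled with metformin"),
    ((58, 152, 8.1), "type 2 diabetes", "controlled with lifestyle changes"),
    ((34, 118, 5.2), "healthy", "no treatment needed"),
]

def similar_patients(query, records, k=2):
    """Return the k records closest to `query` by Euclidean distance."""
    return sorted(records, key=lambda rec: math.dist(query, rec[0]))[:k]

# A new 60-year-old with elevated blood pressure and HbA1c: the two most
# similar prior patients both carry a diabetes diagnosis.
for _, diagnosis, outcome in similar_patients((60, 150, 8.0), records):
    print(f"{diagnosis}: {outcome}")
```

The retrieval step is simple; the hard parts, which the sketch omits, are choosing clinically meaningful features and grounding the comparison in outcomes data at scale.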
  • 400 Women Are Suing Pfizer Over Birth Control Shot That Allegedly Gave Them Brain Tumors

    Tumor Has It
    Jun 1, 10:00 AM EDT / by Noor Al-Sibai
    The pharmaceutical giant allegedly knew about the risks... but didn't warn patients.
    Image by Beata Zawrzel / NurPhoto via Getty / Futurism

    Recent research has linked Pfizer's widely used Depo-Provera birth control shot to a massively increased risk of developing brain tumors — and hundreds of women are suing the pharmaceutical giant over it.

    According to a press release filed on behalf of the roughly 400 plaintiffs in the class action suit, the lawsuit claims that Pfizer and other companies that made generic versions of the injectable contraceptive knew of the link between the shot and the dangerous tumors but didn't properly warn users.

    The suit follows a study published in the British Medical Journal last year that found that people who took the progestin-based shot for a year or more were up to 5.6 times more likely to develop meningioma, a slow-growing brain tumor that forms, per the Cleveland Clinic, on the meninges, the layers of tissue that cover the brain and spinal cord.

    Though Pfizer attached warning labels about meningioma to Depo-Provera sold in Canada in 2015, and in the UK, Europe, and South Africa after the 2024 study was published, no such label was deployed in the United States — a failure that, according to the lawsuit, is "inconsistent [with] global safety standards."

    In an interview with the website DrugWatch, one of the suit's plaintiffs, identified by the initials TC, said that she had been "told how great Depo-Provera was" and decided to start it after an unplanned pregnancy that occurred while she was taking the since-discontinued birth control pill Ortho Tri-Cyclen Lo.

    "I thought it would be more reliable and convenient since I wouldn't have to take it daily," TC told the site, referencing the four injections per year Depo-Provera requires. "I had no idea it would lead to such serious health problems."

    After being on the contraceptive shot for three years — and experiencing intense headaches, months-long uterine bleeding, and weight gain — she finally consulted her doctor and was diagnosed with meningioma. She has since been undergoing treatment and experienced some relief, but even that has been "physically and emotionally draining" because she has to get regular MRIs to monitor the tumor, which likely isn't fatal but still greatly affects her quality of life.

    "It's a constant worry that the tumor might grow," TC said, "and the appointments feel never-ending."

    That fear was echoed by others who spoke to the Daily Mail about their meningioma diagnoses after taking Depo-Provera. Unlike TC, Andrea Faulks of Alabama hadn't been on the shots for years when she learned of her brain tumors, which caused her years of anguish.

    Faulks told the British website that she'd begun taking the medication back in 1993, the year after it was approved by the FDA in the United States. She stopped taking it only a few years later but spent decades having splitting headaches and experiencing dizziness and tremors. After being dismissed by no fewer than six doctors, she finally got an MRI last summer and learned that she had a brain tumor — and is now undergoing radiation to shrink it after all this time.

    "I know this is something I'm going to have to live with for the rest of my life, as long as I live," Faulks told the Daily Mail.

    Currently, the class action case against Pfizer on behalf of women like Faulks and TC is in its earliest stages, as attorneys representing those hundreds of women with brain tumors start working to make them whole. Even if they receive adequate payouts, however, that money won't take away their suffering or give them back the years of their lives lost to tumors they should have been warned about.
  • RFK Jr. is looking in the wrong place for autism’s cause

    Let’s start with one unambiguous fact: More children are diagnosed with autism today than in the early 1990s. According to a sweeping 2000 analysis by the Centers for Disease Control and Prevention, 2 to 7 per 1,000 US children, or roughly 0.5 percent, were diagnosed with autism in the 1990s. That figure has risen to 1 in 35 kids, or roughly 3 percent.

    The apparent rapid increase caught the attention of people like Robert F. Kennedy Jr., who assumed that something had to be changing in the environment to drive it. In 2005, Kennedy, a lawyer and environmental activist at the time, authored an infamous essay in Rolling Stone that primarily placed the blame for the increased prevalence of autism on vaccines. More recently, he has theorized that a mysterious toxin introduced in the late 1980s must be responsible.

    Now, as the nation’s top health official leading the Department of Health and Human Services, Kennedy has declared autism an “epidemic.” And, in April, he launched a massive federal effort to find the culprit for the rise in autism rates, calling for researchers to examine a range of suspects: chemicals, molds, vaccines, and perhaps even ultrasounds given to pregnant mothers.

    “Genes don’t cause epidemics. You need an environmental toxin,” Kennedy said in April when announcing his department’s new autism research project. He argued that too much money had been put into genetic research — “a dead end,” in his words — and his project would be a correction to focus on environmental causes. “That’s where we’re going to find an answer.”

    But according to many autism scientists I spoke to for this story, Kennedy is looking in exactly the wrong place. 
    Three takeaways from this story:

    • Experts say the increase in US autism rates is mostly explained by the expanding definitions of the condition, as well as more awareness and more screening for it.
    • Scientists have identified hundreds of genes that are associated with autism, building a convincing case that genetics are the most important driver of autism’s development — not, as Health Secretary Robert F. Kennedy Jr. has argued, a single environmental toxin.
    • Researchers fear Kennedy’s fixation on outside toxins could distract from genetic research that has facilitated the development of exciting new therapies that could help those with profound autism.

    Autism is a complex disorder with a range of manifestations that has long defied simple explanations, and it’s unlikely that we will ever identify a single “cause” of autism. But scientists have learned a lot in the past 50 years, including identifying some of the most important risk factors. They are not, as Kennedy suggests, out in our environment. They are written into our genetics. What appeared to be a massive increase in autism was actually a byproduct of better screening and more awareness.

    “The way the HHS secretary has been talking about his plans, his goals, he starts out with this basic assumption that nothing worthwhile has been done,” Helen Tager-Flusberg, a psychologist at Boston University who has worked with and studied children with autism for years, said. “Genes play a significant role. We know now that autism runs in families… There is no single underlying factor. Looking for that holy grail is not the best approach.”

    Doctors who treat children with autism often talk about how they wish they could provide easy answers to the families. The answers being uncovered through genetics research may not be simple per se, but they are answers supported by science. Kennedy is muddying the story, pledging to find a silver-bullet answer where likely none exists. 
It’s a false promise — one that could cause more anxiety and confusion for the very families Kennedy says he wants to help.

Robert F. Kennedy Jr. speaks during a news conference at the Department of Health and Human Services in mid-April to discuss his agency’s efforts to determine the cause of autism. Alex Wong/Getty Images

The autism “epidemic” that wasn’t

Autism was first described in 1911, and for many decades, researchers and clinicians mistook the social challenges and language development difficulties common among those with the condition for a psychological issue. Some child therapists even blamed the condition on bad parenting. But in 1977, a study discovered that identical twins, who share all of their DNA, were much more likely to both be autistic than fraternal twins, who share no more DNA than ordinary siblings. It marked a major breakthrough in autism research and pushed scientists to begin coalescing around a different theory: There was a biological factor.

At the time, this was just a theory — scientists lacked the technology to prove those suspicions at the genetic level. And clinicians were still trying to work out an even more fundamental question: What exactly was autism? For a long time, the criteria for diagnosing a person with autism were strictly based on speech development. But clinicians were increasingly observing children who could acquire basic language skills but still struggled with social communication — things like misunderstanding nonverbal cues or taking figurative language literally. Psychologists gradually broadened their definition of autism from a strict and narrow focus on language, culminating in 2013 diagnostic criteria that included a wide range of social and emotional symptoms with three subtypes — the autism spectrum disorder we’re familiar with today. Along the way, autism evolved from a niche diagnosis for the severely impaired into something that encompassed far more children.
It makes sense, then, that as the broadened criteria for autism expanded, more and more children would meet them, and autism rates would rise. That’s precisely what happened. And it means that the “epidemic” that Kennedy and other activists have fixated on is mostly a diagnostic mirage.

Historical autism data is spotty and subject to these same historical biases, but if you look at the prevalence of profound autism alone — those who need the highest levels of support — a clearer picture emerges. In the ’80s and ’90s, low-support-needs individuals would have been less likely to receive an autism diagnosis, given the more restrictive criteria and less overall awareness of the disorder, meaning that people with severe autism likely represented most of the roughly 0.5 percent of children diagnosed with autism in the 1990s. By 2025, when about 3 percent of children are being diagnosed with autism, about one in four of those diagnosed are considered to have high-support-needs autism, the most severe manifestation of the condition. That would equal about 0.8 percent of all US children — a fairly marginal increase from autism rates 30 years ago.

Or look at it another way: In 2000, as many as 60 percent of the people being diagnosed with autism had an intellectual disability, one of the best indicators of high-support-needs autism. In 2022, that percentage was less than 40 percent.

As a recently published CDC report on autism prevalence among young children concluded, the increase in autism rates can largely be accounted for by stronger surveillance and more awareness among providers and parents, rather than a novel toxin or some other external factor driving an increase in cases. Other known risk factors — like more people now having babies later in life, given that parental age is linked to a higher likelihood of autism — are more plausible contributors than anything Kennedy is pointing at, experts say.
“It’s very clear it’s not going to be one environmental toxin,” said Alison Singer, founder of the Autism Science Foundation and parent of a child with profound autism. “If there were a smoking gun, I think they would have found it.”

While Kennedy has fixated on vaccines and environmental influences, scientists have gained more precision in mapping human genetics and identifying the biological mechanisms that appear to be a primary cause of autism. And that not only helps us understand why autism develops, but potentially puts long-elusive therapies within reach.

It began with an accident in the 1990s. Steven Scherer, now director of the Center for Applied Genomics at the Hospital for Sick Children in Toronto, began his career in the late 1980s trying to identify the gene that caused cystic fibrosis — in collaboration with Francis Collins, who went on to lead the Human Genome Project that successfully sequenced all of the DNA in the human genome in the early 2000s. Scherer and Collins’s teams focused on chromosome 7, identified as a likely target by the primitive genetic research available at the time — a coincidence that would reorient Scherer’s career just a few years later, putting him on the trail of autism’s genetic roots. After four years, the researchers concluded that one gene within chromosome 7 caused cystic fibrosis.

Soon after Scherer helped crack the code on cystic fibrosis in the mid-1990s, two parents from California called him: He was the world’s leading expert on chromosome 7, and recent tests had revealed that their children with autism had a problem within that particular chromosome. That very same week, Scherer says, he read the findings of a study by a group at Oxford University, which had looked at the chromosomes of families with two or more kids with autism. They, too, had identified problems within chromosome 7.

“So I said, ‘Okay, we’re going to work on autism,’” Scherer told me.
He helped coordinate a global research project, uniting his Canadian lab with the Oxford team and groups in the US to run a database that became the Autism Genome Project, still the world’s largest repository of genetic information from people with autism.

They had a starting point — one chromosome — but a given chromosome contains hundreds of genes. And humans have, of course, 45 other chromosomes, any of which conceivably might play a role. So over the years, they collected DNA samples from thousands upon thousands of people with autism, sequenced their genes, and then searched for patterns. If the same gene is mutated or missing across a high percentage of autistic people, it goes on the list as potentially associated with the condition. Scientists discovered that autism has not one genetic factor but many — further evidence that this is a condition of complex origin, in which multiple variables likely play a role in its development, rather than one caused by a single genetic error, as sickle-cell anemia is.

Here is one way to think about how far we have come: Joseph Buxbaum, director of the Seaver Autism Center for Research and Treatment at the Icahn School of Medicine at Mount Sinai in New York, entered autism genetics research 35 years ago. He recalls scientists being hopeful that they might identify a half dozen or so genes linked to autism. They have now found 500 — and Buxbaum told me he believes they might find a thousand before they are through.

These genetic factors continue to prove their value in predicting the onset of autism: Scherer pointed to one recent study in which researchers identified people who all shared a mutation in the SHANK3 gene, one of the first to be associated with autism, but who were otherwise unalike: They were not related and came from different demographic backgrounds.
Nevertheless, they had all been diagnosed with autism.

Researchers analyze the brain activity of a 14-year-old boy with autism as part of a University of California San Francisco study that involves intensive brain imaging of kids and their parents who have a rare chromosome disruption connected to autism. The study, the Simons Variation in Individuals Project, is a genetics-first approach to studying autism spectrum and related neurodevelopmental disorders. Michael Macor/San Francisco Chronicle via The Associated Press

Precisely how much genetics contributes to the development of autism remains the subject of ongoing study. By analyzing millions of children with autism and their parents for patterns in diagnoses, multiple studies have attributed about 80 percent of a person’s risk of developing autism to their inherited genetic factors. But of course 80 percent is not 100 percent. We don’t yet have the full picture of how or why autism develops. Among identical twins, for example, studies have found that in most cases, if one twin has high-support-needs autism, the other does as well, affirming the genetic effect. But there are consistently a small minority of cases — between 5 and 10 percent of twin pairs, Scherer told me — in which one twin has relatively low support needs while the other requires a high degree of support for their autism.

Kennedy is not wholly incorrect to look at environmental factors — researchers theorize that autism may be the result of a complex interaction between a person’s genetics and something they experience in utero. Scientists in autism research are exploring the possible influence of, for example, maternal diabetes — high blood sugar that persists throughout pregnancy.
And yet even if these other factors do play some role, the researchers I spoke to agree that genetics is, based on what we know now, far and away the most important driver.

“We need to figure out how other types of genetics and also environmental factors affect autism’s development,” Scherer said. “There could be environmental changes…involved in some people, but it’s going to be based on their genetics and the pathways that lead them to be susceptible.”

While the precise contours of the Health Department’s new autism research project are still taking shape, Kennedy has said that researchers at the National Institutes of Health will collect data from federal programs such as Medicare and Medicaid and somehow use that information to identify possible environmental exposures that lead to autism. He initially pledged results by September, a timeline that, as outside experts pointed out, may be too fast to allow for a thorough and thoughtful review of the research literature. Kennedy has since backed off that deadline, promising some initial findings in the fall with more to come next year.

RFK Jr.’s autism commission could jeopardize access to groundbreaking autism treatments

If Kennedy were serious about moving autism science forward, he would be talking more about genetics, not dismissing it. That’s because genetics is where all of the exciting drug development is currently happening. A biotech firm called Jaguar Gene Therapy has received FDA approval to conduct the first clinical trial of a gene therapy for autism, focused on SHANK3. The treatment, developed in part by one of Buxbaum’s colleagues, is a one-time injection that would replace a mutated or missing SHANK3 gene with a functional one.
The hope is that the therapy will improve speech and other symptoms among people with high-needs autism who have also been diagnosed with a rare chromosomal deletion disorder called Phelan-McDermid syndrome; many people with this condition also have autism spectrum disorder. The trial will begin this year with a few young patients, 2 years old and younger, who have been diagnosed with autism. Jaguar eventually aims to test the therapy on adults with autism over 18 as well. Patients are supposed to start enrolling this year in the trial, which is focused first on establishing the treatment’s safety; if it proves safe, another round of trials would begin to rigorously evaluate its effectiveness.

“This is the stuff that three or four years ago sounded like science fiction,” Singer said. “The conversation has really changed from Is this possible? to What are the best methods to do it? And that’s based on genetics.”

Researchers at Mount Sinai have also experimented with delivering lithium to patients to see if it improves their SHANK3 function. Other gene therapies targeting other genes are in earlier stages of development. Some investigators are experimenting with CRISPR, the revolutionary gene-editing platform, to target the problematic genes that correspond to the onset of autism.

But these scientists fear that their work could be slowed by Kennedy’s insistence on hunting for environmental toxins if federal dollars are shifted into his new project instead. They are already trying to subsist amid deep budget cuts across the many funding streams that support the institutions where they work. “Now we have this massive disruption where instead of doing really key experiments, people are worrying about paying their bills and laying off their staff and things,” Scherer said. “It’s horrible.”

For the families of people with high-needs autism, Kennedy’s crusade has stirred conflicting emotions.
When I spoke with Singer, I was struck by the bind that Kennedy’s rhetoric has put people like her and her family in. Singer told me profound autism has not received enough federal support in the past, as more emphasis was placed on the low-support-needs individuals included in the expanding definition of the disorder, and so she appreciates Kennedy giving voice to those families. She believes that he is sincerely empathetic toward their predicament and their feeling that the mainstream discussion about autism has for too long ignored their experiences in favor of patients with lower support needs. But she worries that his obsession with environmental factors will stymie the research that could yield breakthroughs for people like her child.

“He feels for those families and genuinely wants to help them,” Singer said. “The problem is he is a data denier. You can’t be so entrenched in your beliefs that you can’t see the data right in front of you. That’s not science.”
    RFK Jr. is looking in the wrong place for autism’s cause
    www.vox.com
Let’s start with one unambiguous fact: More children are diagnosed with autism today than in the early 1990s. According to a sweeping 2000 analysis by the Centers for Disease Control and Prevention, between 2 and 7 per 1,000 US children, or roughly 0.5 percent, were diagnosed with autism in the 1990s. That figure has risen to 1 in 35 kids, or roughly 3 percent.

The apparent rapid increase caught the attention of people like Robert F. Kennedy Jr., who assumed that something had to be changing in the environment to drive it. In 2005, Kennedy, then a lawyer and environmental activist, authored an infamous essay in Rolling Stone that primarily placed the blame for the increased prevalence of autism on vaccines. (The article was retracted in 2011 as more studies debunked the vaccine-autism connection.) More recently, he has theorized that a mysterious toxin introduced in the late 1980s must be responsible. Now, as the nation’s top health official leading the Department of Health and Human Services, Kennedy has declared autism an “epidemic.” And, in April, he launched a massive federal effort to find the culprit for the rise in autism rates, calling for researchers to examine a range of suspects: chemicals, molds, vaccines, and perhaps even ultrasounds given to pregnant mothers.

“Genes don’t cause epidemics. You need an environmental toxin,” Kennedy said in April when announcing his department’s new autism research project. He argued that too much money had been put into genetic research — “a dead end,” in his words — and that his project would be a correction, refocusing on environmental causes. “That’s where we’re going to find an answer.”

But according to many autism scientists I spoke to for this story, Kennedy is looking in exactly the wrong place. 
Three takeaways from this story

    Experts say the increase in US autism rates is mostly explained by the expanding definition of the condition, as well as more awareness and more screening for it.
    Scientists have identified hundreds of genes associated with autism, building a convincing case that genetics, not a single environmental toxin as Health Secretary Robert F. Kennedy Jr. has argued, is the most important driver of autism’s development.
    Researchers fear Kennedy’s fixation on outside toxins could distract from the genetic research that has enabled the development of exciting new therapies for people with profound autism.

Autism is a complex disorder with a range of manifestations that has long defied simple explanations, and it’s unlikely that we will ever identify a single “cause” of autism. But scientists have learned a lot in the past 50 years, including identifying some of the most important risk factors. They are not, as Kennedy suggests, out in our environment. They are written into our genetics. What appeared to be a massive increase in autism was actually a byproduct of better screening and more awareness.

“The way the HHS secretary has been talking about his plans, his goals, he starts out with this basic assumption that nothing worthwhile has been done,” said Helen Tager-Flusberg, a psychologist at Boston University who has worked with and studied children with autism for years. “Genes play a significant role. We know now that autism runs in families… There is no single underlying factor. Looking for that holy grail is not the best approach.”

Doctors who treat children with autism often talk about how they wish they could provide easy answers to the families. The answers being uncovered through genetics research may not be simple per se, but they are answers supported by science. Kennedy is muddying the story, pledging to find a silver-bullet answer where likely none exists. 
It’s a false promise — one that could cause more anxiety and confusion for the very families Kennedy says he wants to help.

Robert F. Kennedy Jr. speaks during a news conference at the Department of Health and Human Services in mid-April to discuss the agency’s efforts to determine the cause of autism. Alex Wong/Getty Images

The autism “epidemic” that wasn’t

Autism was first described in 1911, and for many decades, researchers and clinicians mistook the social challenges and language development difficulties common among those with the condition for a psychological issue. Some child therapists even blamed the condition on bad parenting. But in 1977, a study discovered that identical twins, who share all of their DNA, were much more likely to both be autistic than fraternal twins, who share no more DNA than ordinary siblings. It marked a major breakthrough in autism research and pushed scientists to begin coalescing around a different theory: There was a biological factor.

At the time, this was just a theory — scientists lacked the technology to prove those suspicions at the genetic level. And clinicians were also still trying to work out an even more fundamental question: What exactly was autism? For a long time, the criteria for diagnosing a person with autism were strictly based on speech development. But clinicians were increasingly observing children who could acquire basic language skills but still struggled with social communication — things like misunderstanding nonverbal cues or taking figurative language literally. Psychologists gradually broadened their definition of autism from a strict and narrow focus on language, culminating in 2013 criteria that included a wide range of social and emotional symptoms with three subtypes — the autism spectrum disorder we’re familiar with today.

Along the way, autism had evolved from a niche diagnosis for the severely impaired to something that encompassed far more children. 
It makes sense, then, that as the broad criteria for autism expanded, more and more children would meet them, and autism rates would rise. That’s precisely what happened. And it means that the “epidemic” that Kennedy and other activists have been fixated on is mostly a diagnostic mirage.

Historical autism data is spotty and subject to these same historical biases, but if you look at the prevalence of profound autism alone — those who need the highest levels of support — a clearer picture emerges. (There is an ongoing debate in the autism community about whether to use the terminology of “profound autism” or “high support needs” for those who have the most severe form of the condition.)

In the ’80s and ’90s, low-support-needs individuals would have been less likely to receive an autism diagnosis given the more restrictive criteria and less overall awareness of the disorder, meaning that people with severe autism likely represented most of the roughly 0.5 percent of children diagnosed with autism in the 1990s. (One large analysis from Atlanta examining data from 1996 found that 68 percent of kids ages 3 to 10 diagnosed with autism had an IQ below 70, the typical cutoff for intellectual disability.)

By 2025, when about 3 percent of children are being diagnosed with autism, about one in four of those diagnosed are considered to have high-support-needs autism, the most severe manifestation of the condition. That would equal about 0.8 percent of all US children — a fairly marginal increase from autism rates 30 years ago. Or look at it another way: In 2000, as many as 60 percent of the people being diagnosed with autism had an intellectual disability, one of the best indicators of high-support-needs autism. 
In 2022, that percentage was less than 40 percent.

As a recently published CDC report on autism prevalence among young children concluded, the increase in autism rates can largely be accounted for by stronger surveillance and more awareness among providers and parents, rather than a novel toxin or some other external factor driving an increase in cases. Other known risk factors — like more people now having babies later in their life, given that parental age is linked to a higher likelihood of autism — are more likely to be a factor than anything Kennedy is pointing at, experts say.

“It’s very clear it’s not going to be one environmental toxin,” said Alison Singer, founder of the Autism Science Foundation and parent of a child with profound autism. “If there were a smoking gun, I think they would have found it.”

While Kennedy has fixated on vaccines and environmental influences, scientists have gained more precision in mapping human genetics and identifying the biological mechanisms that appear to be a primary cause of autism. And that not only helps us understand why autism develops, but potentially puts long-elusive therapies within reach.

It began with an accident in the 1990s. Steven Scherer, now director of the Center for Applied Genomics at the Hospital for Sick Children in Toronto, began his career in the late 1980s trying to identify the gene that caused cystic fibrosis — in collaboration with Francis Collins, who went on to lead the Human Genome Project that successfully sequenced all of the DNA in the human genome in the early 2000s. Scherer and Collins’s teams focused on chromosome 7, identified as a likely target by the primitive genetic research available at the time, a coincidence that would reorient Scherer’s career just a few years later, putting him on the trail of autism’s genetic roots. After four years, the researchers concluded that one gene within chromosome 7 caused cystic fibrosis. 
Soon after Scherer helped crack the code on cystic fibrosis in the mid-1990s, two parents from California called him: He was the world’s leading expert on chromosome 7, and recent tests had revealed that their children with autism had a problem within that particular chromosome. That very same week, Scherer says, he read the findings of a study by a group at Oxford University, which had looked at the chromosomes of families with two or more kids with autism. They, too, had identified problems within chromosome 7.

“So I said, ‘Okay, we’re going to work on autism,’” Scherer told me. He helped coordinate a global research project, uniting his Canadian lab with the Oxford team and groups in the US to run a database that became the Autism Genome Project, still the world’s largest repository of genetic information of people with autism.

They had a starting point — one chromosome — but a given chromosome contains hundreds of genes. And humans have, of course, 45 other chromosomes, any of which conceivably might play a role. So over the years, they collected DNA samples from thousands upon thousands of people with autism, sequenced their genes, and then searched for patterns. If the same gene is mutated or missing across a high percentage of autistic people, it goes on the list as potentially associated with the condition. Scientists discovered that autism has not one genetic factor, but many — further evidence that this is a condition of complex origin, in which multiple variables likely play a role in its development, rather than one caused by a single genetic error like sickle-cell anemia.

Here is one way to think about how far we have come: Joseph Buxbaum, the director of the Seaver Autism Center for Research and Treatment at the Icahn School of Medicine at Mount Sinai in New York, entered autism genetics research 35 years ago. 
He recalls scientists being hopeful that they might identify a half dozen or so genes linked to autism. They have now found 500 genes — and Buxbaum told me he believed they might find a thousand before they are through. These genetic factors continue to prove their value in predicting the onset of autism: Scherer pointed to one recent study in which the researchers identified people who all shared a mutation in the SHANK3 gene, one of the first to be associated with autism, but who were otherwise unalike: They were not related and came from different demographic backgrounds. Nevertheless, they had all been diagnosed with autism.

Researchers analyze the brain activity of a 14-year-old boy with autism as part of a University of California San Francisco study that involves intensive brain imaging of kids and their parents who have a rare chromosome disruption connected to autism. The study, the Simons Variation in Individuals Project, is a genetics-first approach to studying autism spectrum and related neurodevelopmental disorders. Michael Macor/San Francisco Chronicle via The Associated Press

Precisely how much genetics contributes to the development of autism remains the subject of ongoing study. By analyzing millions of children with autism and their parents for patterns in diagnoses, multiple studies have attributed about 80 percent of a person’s risk of developing autism to their inherited genetic factors. But of course 80 percent is not 100 percent. We don’t yet have the full picture of how or why autism develops. Among identical twins, for example, studies have found that in most cases, if one twin has high-support-needs autism, the other does as well, affirming the genetic effect. 
But there are consistently a small minority of cases — between 5 and 10 percent of twin pairs, Scherer told me — in which one twin has relatively low support needs while the other requires a high degree of support for their autism.

Kennedy is not wholly incorrect to look at environmental factors — researchers theorize that autism may be the result of a complex interaction between a person’s genetics and something they experience in utero. Scientists in autism research are exploring the possible influence when, for example, a person’s mother develops maternal diabetes, high blood sugar that persists throughout pregnancy. And yet even if these other factors do play some role, the researchers I spoke to agree that genetics is, based on what we know now, far and away the most important driver.

“We need to figure out how other types of genetics and also environmental factors affect autism’s development,” Scherer said. “There could be environmental changes…involved in some people, but it’s going to be based on their genetics and the pathways that lead them to be susceptible.”

While the precise contours of the Health Department’s new autism research project are still taking shape, Kennedy has said that researchers at the National Institutes of Health will collect data from federal programs such as Medicare and Medicaid and somehow use that information to identify possible environmental exposures that lead to autism. He initially pledged results by September, a timeline that, as outside experts pointed out, may be too fast to allow for a thorough and thoughtful review of the research literature. Kennedy has since backed off that deadline, promising some initial findings in the fall with more to come next year.

RFK Jr.’s autism commission research risks the accessibility of groundbreaking autism treatments

If Kennedy were serious about moving autism science forward, he would be talking more about genetics, not dismissing them. 
That’s because genetics is where all of the exciting drug development is currently happening.

A biotech firm called Jaguar Gene Therapy has received FDA approval to conduct the first clinical trial of a gene therapy for autism, focused on SHANK3. The treatment, developed in part by one of Buxbaum’s colleagues, is a one-time injection that would replace a mutated or missing SHANK3 gene with a functional one. The hope is that the therapy would improve speech and other symptoms among people with high-needs autism who have also been diagnosed with a rare chromosomal deletion disorder called Phelan-McDermid syndrome; many people with this condition also have autism spectrum disorder.

The trial will begin this year with a few infant patients, 2 years old and younger, who have been diagnosed with autism; Jaguar eventually aims to test the therapy on adults over 18 with autism. The trial is focused on first establishing the treatment’s safety; if it proves safe, another round of trials would start to rigorously evaluate its effectiveness.

“This is the stuff that three or four years ago sounded like science fiction,” Singer said. “The conversation has really changed from Is this possible? to What are the best methods to do it? And that’s based on genetics.”

Researchers at Mount Sinai have also experimented with delivering lithium to patients to see if it improves their SHANK3 function. Other gene therapies targeting other genes are in earlier stages of development. Some investigators are experimenting with CRISPR, the revolutionary gene-editing platform, to target the problematic genes that correspond to the onset of autism.

But these scientists fear that their work could be slowed by Kennedy’s insistence on hunting for environmental toxins, if federal dollars are instead shifted into his new project. 
They are already trying to subsist amid deep budget cuts across the many funding streams that support the institutions where they work. “Now we have this massive disruption where instead of doing really key experiments, people are worrying about paying their bills and laying off their staff and things,” Scherer said. “It’s horrible.”

For the families of people with high-needs autism, Kennedy’s crusade has stirred conflicting emotions. Alison Singer, the leader of the Autism Science Foundation, is also the parent of a child with profound autism. When I spoke with her, I was struck by the bind that Kennedy’s rhetoric has put people like her and her family in. Singer told me profound autism has not received enough federal support in the past, as more emphasis was placed on the individuals with low support needs included in the expanding definitions of the disorder, and so she appreciates Kennedy giving voice to those families. She believes that he is sincerely empathetic toward their predicament and their feeling that the mainstream discussion about autism has for too long ignored their experiences in favor of patients with lower support needs. But she worries that his obsession with environmental factors will stymie the research that could yield breakthroughs for people like her child.

“He feels for those families and genuinely wants to help them,” Singer said. “The problem is he is a data denier. You can’t be so entrenched in your beliefs that you can’t see the data right in front of you. That’s not science.”
  • DexCare AI Platform Tackles Health Care Access, Cost Crisis

    Care management platform DexCare is applying artificial intelligence in an innovative way to fix health care access issues. Its AI-driven platform helps health systems overcome rising costs, limited capacity, and fragmented digital infrastructure.
    As Americans face worsening health outcomes and soaring costs, DexCare Co-founder Derek Streat sees opportunity in the crisis and is leading a push to apply AI and machine learning to health care’s toughest operational challenges — from overcrowded emergency rooms to disconnected digital systems.
    No stranger to using AI to solve health care issues, Streat is guiding DexCare as it leverages AI and ML to confront the industry’s most persistent pain points: spiraling costs, resource constraints, and the impossible task of doing more with less. Its platform helps liberate data silos to orchestrate care better and deliver a “shoppable” experience.
    The combination unlocks patient access to care and optimizes health care resources. DexCare enables health systems to see 40% more patients with existing clinical resources.
    Streat readily admits that some advanced companies use AI to enhance clinical and medical research. However, advanced AI tools such as conversational generative AI are less common in the health care access space. DexCare addresses that service gap.
    “Access is broken, and our fundamental belief is that there haven’t been enough solutions to balance patient, provider, and health system needs and objectives,” he told TechNewsWorld.
    Improving Patient Access With Predictive AI
    Achieving that balance depends on the underlying information drawn from neural networks, ML models, classification systems, and advancements in generative AI. These elements build on one another.
    Derek Streat, Co-founder of DexCare
    With the goal of a better customer experience, DexCare’s platform helps care providers optimize the algorithm so everyone benefits. The focus is on ensuring patients get what matches their intent and motivations while respecting the providers’ capacity and needs, explained Streat.
    He describes the platform’s technology as a foundational pyramid based on data that AI optimizes and manages. Those components ensure high-fidelity outcome predictions for recommended care options.
    “It could be a doctor in a clinic or a nurse in a virtual care system,” he suggested. “I’m not talking about clinical outcomes. I’m talking about what you’re looking for.”
    Ultimately, Streat said, that managed balance keeps providers from burning out and makes access a sustainable business line for the health system.
    From Providence Prototype to Scalable Solution
    Streat defined DexCare as an access optimization company. He shared that the platform originated from a ground-floor build within the Providence Health System.
    After four years of development and validation, he launched the technology for broader use across the health care industry.
    “It’s well tested and very effective in what it does. That allowed us to have something scalable across organizations as well. Our expansion makes health care more discoverable to consumers and patients and more sustainable for medical providers and the health systems we serve,” he said.
    Digital Marquee for Consumers, Service Management for Providers
    DexCare’s AI works on multiple levels. It provides health care system or medical facility services as a contact center. That part attracts and curates audiences, consumers, and patients. Its digital assets could be websites, landing pages, or screening kiosks.
    Another part of the platform intelligently navigates patients to the safest and best care option. This process engages the accumulated data and automatically allocates the health system’s resources.

    “It manages schedules and available staff and facilities and automatically allocates them when and where they can be most productively employed,” explained Streat.
    The platform excels at load balancing, using AI to rationalize all those components. Its decision engine ensures that the selected resources match the needed services, so medical treatment can be delivered as efficiently and effectively as possible for both the patient and the organization.
    How DexCare Integrates With CRM Platforms
    According to Streat, DexCare is not customer relationship management software. Instead, the platform is a tie-in that infuses its AI tools and data services that blend with other platforms such as Salesforce and Oracle.
    “We make it as flexible as we can. It is pretty scalable to the point where now we can touch about 20% of the U.S. population through our health system partners,” he offered.
    Patients often do not realize they are interacting with a DexCare-powered experience console under brands such as Kaiser, Providence, and SSM Health, some of the health systems that use the platform. The platform is flexible and adapts to the needs of various health agencies.
    For instance, fulfillment technologies book appointments and supply synchronous virtual solutions.
    “Whatever the modality or setting is, we can either connect with whatever you’re using as a health system, or you can use your own underlying pieces as well,” said Streat.
    He noted that the intelligent data acquisition built into the DexCare platform accesses the electronic medical record, which includes patients’ demographics, medical history, diagnoses, medications, allergies, immunization records, lab results, and treatment plans.
    “The application programming interface gives us real-time availability, allows us to predict a certain provider’s capacity, and maintains EMR as a source of truth,” said Streat.
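    The EMR-as-source-of-truth idea can be illustrated with a minimal sketch; the function and field names below are invented for illustration and are not DexCare's API. Cached schedule data is cross-checked against the EMR's record of booked appointments before any slot is offered to a patient, so stale cache entries never reach the booking flow.

```python
# Hypothetical illustration of "real-time availability" with the EMR
# as the source of truth. The slot/field names are invented for this
# sketch and do not reflect any real scheduling API.

def open_slots(schedule: list[dict], emr_booked_ids: set[str]) -> list[dict]:
    """Filter a cached schedule against the EMR's booked appointment IDs,
    keeping only slots the EMR still considers open."""
    return [s for s in schedule if s["slot_id"] not in emr_booked_ids]

schedule = [
    {"slot_id": "a1", "provider": "Dr. Kim", "time": "09:00"},
    {"slot_id": "a2", "provider": "Dr. Kim", "time": "09:30"},
]
# The EMR says a1 was just booked elsewhere; only a2 remains offerable.
assert [s["slot_id"] for s in open_slots(schedule, {"a1"})] == ["a2"]
```

    In a real system the booked-ID set would come from the EMR integration in real time; the design choice being illustrated is simply that the cache is advisory and the EMR always wins.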
    AI’s Long-Term Role in Health Care Access
    Health care management by conversational generative AI provides insights into where organizations struggle, need to adjust their operations, or reassign staff to manage patient flow. That all takes place on the platform’s back end.
    According to Streat, the front-end value proposition is pretty simple. It helps get 20% to 30% more patients into the health system. Organizations generate nine times the initial visit value in downstream revenue for additional services, Streat said.
    He assured that the other part of the value proposition is a lower marginal cost of delivering each visit. That results from matching resources with patients in a way that allows balancing the load across the organization’s network.

    “That depends on the specific use case, but we find up to a 40% additional capacity within the health system without hiring additional resources,” he said.
    How? That is where the underlying AI data comes into play. It helps practitioners make more informed decisions about which patients should be matched with which providers.
    “Not everybody needs to see an expensive doctor in a clinic,” Streat contended. “Sometimes, a nurse in a virtual visit or educational information will be just fine.”
    Despite all the financial metrics, patients want medical treatment and to move on, which is really what the game is here, he surmised.
    Why Generative AI Lags in Health Care
    Streat noted that generative AI is growing rapidly in sophistication, spanning conversational interfaces, analytical capability, and predictive power. These technologies are being applied throughout other industries and businesses, but they are not yet widely adopted in health care systems.
    He indicated that part of that lag is that health care access needs are different and not as suited for conversational AI solutions hastily layered onto legacy systems. Ultimately, changing health care requires delivering things at scale.
    “Within a health system, its infrastructure, and the plumbing required to respect the systems of records, it’s just a different world,” he said.
    Streat sees AI making it possible for us to move away from searching through a long list of doctors online to booking through a robot operator with a pleasant accent.
    “We will focus on the back-end intelligence and continue to apply it to these lower-friction ways for people to interact with the health system. That’s incredibly exciting to me,” he concluded.
“We will focus on the back-end intelligence and continue to apply it to these lower-friction ways for people to interact with the health system. That’s incredibly exciting to me,” he concluded. #dexcare #platform #tackles #health #care
    DexCare AI Platform Tackles Health Care Access, Cost Crisis
    www.technewsworld.com
    Care management platform DexCare is applying artificial intelligence (AI) in an innovative way to fix health care access issues. Its AI-driven platform helps health systems overcome rising costs, limited capacity, and fragmented digital infrastructure. As Americans face worsening health outcomes and soaring costs, DexCare Co-founder Derek Streat sees opportunity in the crisis and is leading a push to apply AI and machine learning (ML) to health care’s toughest operational challenges, from overcrowded emergency rooms to disconnected digital systems. No stranger to using AI to solve health care issues, Streat is guiding DexCare as it leverages AI and ML to confront the industry’s most persistent pain points: spiraling costs, resource constraints, and the impossible task of doing more with less. Its platform helps liberate data silos to orchestrate care better and deliver a “shoppable” experience. The combination unlocks patient access to care and optimizes health care resources. DexCare enables health systems to see 40% more patients with existing clinical resources. Streat readily admits that some advanced companies use AI to enhance clinical and medical research. However, advanced AI tools such as conversational generative AI are less common in the health care access space. DexCare addresses that service gap. “Access is broken, and our fundamental belief is that there haven’t been enough solutions to balance patient, provider, and health system needs and objectives,” he told TechNewsWorld.

    Improving Patient Access With Predictive AI

    Achieving that balance depends on the underlying information drawn from neural networks, ML models, classification systems, and advancements in generative AI. These elements build on one another. With the goal of a better customer experience (CX), DexCare’s platform helps care providers optimize the algorithm so everyone benefits.
The focus is on ensuring patients get what matches their intent and motivations while respecting providers’ capacity and needs, explained Streat. He describes the platform’s technology as a foundational pyramid built on data that AI optimizes and manages. Those components ensure high-fidelity outcome predictions for recommended care options. “It could be a doctor in a clinic or a nurse in a virtual care system,” he suggested. “I’m not talking about clinical outcomes. I’m talking about what you’re looking for.” Ultimately, that managed balance keeps providers from burning out and makes care a sustainable business line for the health system.

From Providence Prototype to Scalable Solution

Streat defined DexCare as an access optimization company. The platform originated as a ground-floor build within the Providence Health System; after four years of development and validation, he launched the technology for broader use across the health care industry. “It’s well tested and very effective in what it does. That allowed us to have something scalable across organizations as well. Our expansion makes health care more discoverable to consumers and patients and more sustainable for medical providers and the health systems we serve,” he said.

Digital Marquee for Consumers, Service Management for Providers

DexCare’s AI works on multiple levels. One part serves as a contact center for a health care system or medical facility, attracting and curating audiences, consumers, and patients through digital assets such as websites, landing pages, and screening kiosks. Another part intelligently navigates patients to the safest and best care option, drawing on the accumulated data to allocate the health system’s resources automatically. “It manages schedules and available staff and facilities and automatically allocates them when and where they can be most productively employed,” explained Streat.

The platform excels at load balancing, using AI to rationalize all those components. Its decision engine matches the selected resources to the needed services so treatment can be delivered as efficiently and effectively as possible for both the patient and the organization.

How DexCare Integrates With CRM Platforms

According to Streat, DexCare is not customer relationship management software. Instead, it is a tie-in that infuses its AI tools and data services into other platforms such as Salesforce and Oracle. “We make it as flexible as we can. It is pretty scalable to the point where now we can touch about 20% of the U.S. population through our health system partners,” he offered. Patients at Kaiser, Providence, and SSM Health, some of the health systems using the platform, do not realize they are interacting with a DexCare-powered experience console under those brands. The platform is flexible and adapts to the needs of various health agencies. For instance, its fulfillment technologies book appointments and supply synchronous virtual visits. “Whatever the modality or setting is, we can either connect with whatever you’re using as a health system, or you can use your own underlying pieces as well,” said Streat. He noted that the intelligent data acquisition built into the platform accesses the electronic medical record (EMR), which includes patients’ demographics, medical history, diagnoses, medications, allergies, immunization records, lab results, and treatment plans. “The application programming interface [API] gives us real-time availability, allows us to predict a certain provider’s capacity, and maintains EMR as a source of truth,” said Streat.

AI’s Long-Term Role in Health Care Access

Health care management by conversational generative AI provides insights into where organizations struggle, need to adjust their operations, or should reassign staff to manage patient flow. That all takes place on the platform’s back end. According to Streat, the front-end value proposition is simple: it gets 20% to 30% more patients into the health system, and organizations generate nine times the initial visit value in downstream revenue for additional services. The other part of the value proposition, he said, is a lower marginal cost of delivering each visit, which results from matching resources with patients in a way that balances the load across the organization’s network. “That depends on the specific use case, but we find up to a 40% additional capacity within the health system without hiring additional resources,” he said. How? That is where the underlying AI data comes into play. It helps practitioners make more informed decisions about which patients should be matched with which providers. “Not everybody needs to see an expensive doctor in a clinic,” Streat contended. “Sometimes, a nurse in a virtual visit or educational information will be just fine.” Despite all the financial metrics, patients simply want medical treatment and to move on, which is really what the game is here, he surmised.

Why Generative AI Lags in Health Care

Streat noted that generative AI, with its conversational interfaces, analytical capability, and predictive mastery, is rapidly growing more sophisticated and is being applied throughout other industries and businesses, yet it is not widely adopted in health care systems. Part of that lag, he indicated, is that health care access needs are different and not as suited to conversational AI solutions hastily layered onto legacy systems. Ultimately, changing health care requires delivering things at scale. “Within a health system, its infrastructure, and the plumbing required to respect the systems of records, it’s just a different world,” he said. Streat sees AI moving us away from searching through a long list of doctors online and toward booking through a robot operator with a pleasant accent. “We will focus on the back-end intelligence and continue to apply it to these lower-friction ways for people to interact with the health system. That’s incredibly exciting to me,” he concluded.
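The routing idea Streat describes, sending each patient to the least costly care option that is safe for their needs and still has capacity, can be illustrated with a minimal sketch. This is not DexCare's implementation; the class, function, costs, and acuity scores below are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class CareOption:
    name: str        # e.g., "virtual nurse visit" or "in-clinic doctor"
    cost: float      # hypothetical marginal cost of delivering one visit
    capacity: int    # remaining open slots (a real system would pull this from the EMR)
    max_acuity: int  # highest patient acuity this option can safely handle

def route_patient(acuity: int, options: list) -> "CareOption | None":
    """Pick the cheapest safe option with open capacity, or None if nothing fits."""
    eligible = [o for o in options if o.capacity > 0 and o.max_acuity >= acuity]
    if not eligible:
        return None  # no safe capacity; a real platform would escalate here
    best = min(eligible, key=lambda o: o.cost)
    best.capacity -= 1  # reserve the slot
    return best

options = [
    CareOption("educational content", cost=0.0, capacity=999, max_acuity=0),
    CareOption("virtual nurse visit", cost=40.0, capacity=2, max_acuity=1),
    CareOption("in-clinic doctor", cost=180.0, capacity=5, max_acuity=3),
]

print(route_patient(1, options).name)  # low-acuity need -> "virtual nurse visit"
print(route_patient(3, options).name)  # high-acuity need -> "in-clinic doctor"
```

In a real deployment the capacity figures would come from the EMR via the API Streat mentions, and the safety rules would be clinically validated rather than a single acuity number; the sketch only shows how cheap options absorb low-acuity demand so scarce clinicians stay available for patients who need them.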
  • Why Women With Type 2 Diabetes Are Diagnosed Later Than Men

    Researchers are trying to understand more about the biological and social differences that contribute to later diabetes diagnoses and worse outcomes in women.
    www.wired.com
  • Breakthrough Alzheimer’s Blood Test Explained By Neurologists

    The FDA recently cleared the Lumipulse blood test for early diagnosis of Alzheimer’s disease in people 55 and over with memory loss. The noninvasive Lumipulse blood test measures the levels of two proteins—pTau 217 and β-Amyloid 1-42—in plasma and calculates the ratio between them. This ratio is correlated with the presence or absence of amyloid plaques, a hallmark of Alzheimer’s disease, in the brain. (Getty)

    Whether you’re noticing changes in your memory that are affecting your daily life, caring for a loved one recently diagnosed with dementia, evaluating a patient as a physician, or simply worried about someone close to you, the recent FDA clearance of the Lumipulse blood test for the early diagnosis of Alzheimer’s disease is a significant development that you should be aware of. Here’s what you need to know about this breakthrough Alzheimer’s blood test.

    The Lumipulse G pTau217/β-Amyloid 1-42 Plasma Ratio test is designed for the early detection of amyloid plaques associated with Alzheimer’s disease in adults aged 55 years and older who are showing signs and symptoms of the condition. If you’ve witnessed a loved one gradually lose their memories due to the impact of amyloid plaques in their brain, you understand how important a test like this can be.

    The Lumipulse test measures the levels of two proteins—pTau 217 and β-Amyloid 1-42—in plasma and calculates the ratio between them. This ratio is correlated with the presence or absence of amyloid plaques in the brain, potentially reducing the need for more invasive procedures like PET scans or spinal fluid analysis.
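The arithmetic the article describes, a ratio of two plasma protein concentrations compared against a clinical cutoff, can be sketched as follows. The units, the cutoff value, and the function names here are hypothetical; the actual Lumipulse assay uses its own validated thresholds and reporting rules:

```python
# Hypothetical cutoff, chosen only to make the example run; the real assay
# uses clinically validated thresholds, not this number.
HYPOTHETICAL_CUTOFF = 0.005

def plasma_ratio(ptau217_pg_ml: float, abeta42_pg_ml: float) -> float:
    """Return the pTau 217 / beta-Amyloid 1-42 plasma ratio."""
    if abeta42_pg_ml <= 0:
        raise ValueError("beta-Amyloid 1-42 concentration must be positive")
    return ptau217_pg_ml / abeta42_pg_ml

def suggests_amyloid_plaques(ptau217_pg_ml: float, abeta42_pg_ml: float) -> bool:
    """A higher ratio correlates with amyloid plaques; compare it to the cutoff."""
    return plasma_ratio(ptau217_pg_ml, abeta42_pg_ml) >= HYPOTHETICAL_CUTOFF

print(plasma_ratio(0.4, 100.0))             # close to 0.004
print(suggests_amyloid_plaques(0.4, 100.0)) # below the hypothetical cutoff
```

The point of the sketch is only that the test reduces to a single ratio: a relatively high pTau 217 level against a relatively low β-Amyloid 1-42 level pushes the ratio up, which correlates with amyloid plaques in the brain.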

    Benefits of Testing With Lumipulse
    Dr. Phillipe Douyon, a neurologist and author of “7 Things You Should Be Doing to Minimize Your Risk of Dementia,” notes that the Alzheimer’s Association has reported that 50-70% of symptomatic patients in community settings are inaccurately diagnosed with Alzheimer’s disease. In specialized memory clinics, this misdiagnosis rate drops to 25-30%. “Having a test that provides early and accurate insights into the cause of someone’s dementia could be a massive game changer,” says Dr. Douyon.
    Illustration: cross sections of a normal brain and an Alzheimer’s brain, showing atrophy of the cerebral cortex, enlarged ventricles, and the hippocampus, with a close-up of neurons bearing neurofibrillary tangles and amyloid plaques. (Getty)
    This new test follows the recent FDA approval of two medications, lecanemab and donanemab, which are highly effective in removing amyloid from the brain. Clinical trials have shown that these treatments can slow the progression of dementia. Currently, to qualify for these medications, patients must undergo expensive examinations, such as a brain amyloid PET scan or a lumbar puncture to analyze their spinal fluid. Many patients, however, do not have access to PET imaging or specialist care.

    “A blood test makes diagnostic procedures more accessible and benefits underserved populations,” says Dr. Haythum Tayeb, a neurologist at WMCHealth. “It also enables earlier and more personalized care planning, even before formal treatment begins. This empowers patients and their families to make informed decisions sooner,” Dr. Tayeb adds.
    Who Should Be Tested With Lumipulse
    While this blood test may improve access to care for patients from communities lacking neurology and other specialty services, it is recommended only for individuals experiencing memory problems, not for those who are asymptomatic. “Given that there is no specific treatment indicated for asymptomatic persons, there is a risk of introducing psychological harm at this stage,” warns Dr. James Noble, professor of neurology at Columbia University Irving Medical Center and author of Navigating Life With Dementia. “Healthy approaches to lifestyle will remain central in adulthood whether or not someone has a positive test, and that advice will not really change,” adds Dr. Noble.
    Living a healthy lifestyle can significantly enhance brain health, regardless of whether a person has an abnormal accumulation of amyloid in their brain. Key factors include regular exercise, following a healthy diet such as the Mediterranean diet, getting adequate sleep, and engaging in social and cognitive activities. These practices are all essential for maintaining cognitive function. Additionally, taking steps to protect your hearing may help reduce the risk of developing dementia.
    Anyone experiencing memory loss should consult their medical provider for an evaluation. The provider can conduct basic cognitive testing and determine if a referral to a specialist is necessary. If the individual meets the criteria for testing, the Lumipulse blood test should also be considered.
    Future Of Alzheimer’s Testing
    “Looking across the wide landscape of medicine, many other conditions benefit from early detection, diagnosis, and treatment. There is no reason to believe that Alzheimer’s disease will be any different,” says Dr. Noble. Indeed, screening for diseases like colon cancer, breast cancer, and high blood pressure has significantly extended the average American lifespan. Imagine how much our lives could change if we could screen for Alzheimer’s dementia in the same way. This would be particularly useful for patients at higher risk due to age or family history.
    Providing earlier intervention for Alzheimer’s disease could potentially reduce amyloid buildup in the brain, help preserve memories, and allow individuals to live more independently at home, rather than in nursing homes.
    Another advantage of using a test like the Lumipulse blood test is the ability to inform a patient that their memory loss is not linked to Alzheimer’s disease. While a negative blood test does not entirely rule out an Alzheimer’s diagnosis, it does make it less probable. This could prompt the medical provider to conduct further testing to identify a more accurate cause for the patient’s memory loss. In some instances, the medical provider may conclude that the patient’s memory loss is related to normal aging. This is also important so that patients are not unnecessarily placed on medications that may not help them.
    It is reasonable to anticipate that additional blood-based biomarkers for diagnosing Alzheimer’s disease and other dementias will be available in the future. Perhaps one day, there will be a dementia panel blood test that can be sent off to provide early diagnosis of a wide range of dementias.
    Alzheimer’s blood testing is not only beneficial for individuals, but it also represents a significant advancement for research. Doctors and scientists can more easily identify individuals in the early stages of Alzheimer’s disease, which accelerates clinical trials for new medications. This increased diagnostic accuracy can enhance the effectiveness of Alzheimer’s clinical trials, as it ensures that patients enrolled have more reliable diagnoses. Consequently, new and more effective treatments could be developed and made available more quickly.
    The Lumipulse Alzheimer’s blood test marks a pivotal moment in our approach to this disease. While patients may still need confirmatory testing through brain imaging or spinal fluid analysis, this blood test enables the medical community to adopt a more proactive, precise, and personalized strategy for diagnosing and treating patients with dementia. This simple blood test brings us one step closer to earlier answers, better care, and renewed hope for millions of people facing the uncertainty of dementia.
    www.forbes.com
    The FDA recently cleared the Lumipulse blood test for early diagnosis of Alzheimer's disease in ... More people 55 and over with memory loss. The noninvasive Lumipulse blood test measures the levels of two proteins—pTau 217 and β-Amyloid 1-42—in plasma and calculates the ratio between them. This ratio is correlated with the presence or absence of amyloid plaques, a hallmark of Alzheimer's disease, in the brain.getty Whether you’re noticing changes in your memory that are affecting your daily life, caring for a loved one recently diagnosed with dementia, evaluating a patient as a physician, or simply worried about someone close to you, the recent FDA clearance of the Lumipulse blood test for the early diagnosis of Alzheimer’s disease is a significant development that you should be aware of. Here’s what you need to know about this Breakthrough Alzheimer’s blood test. The Lumipulse G pTau217/β-Amyloid 1-42 Plasma Ratio test is designed for the early detection of amyloid plaques associated with Alzheimer’s disease in adults aged 55 years and older who are showing signs and symptoms of the condition. If you’ve witnessed a loved one gradually lose their memories due to the impact of amyloid plaques in their brain, you understand how important a test like this can be. The Lumipulse test measures the levels of two proteins—pTau 217 and β-Amyloid 1-42—in plasma and calculates the ratio between them. This ratio is correlated with the presence or absence of amyloid plaques in the brain, potentially reducing the need for more invasive procedures like PET scans or spinal fluid analysis. Benefits of testing with Lumipulse Dr. Phillipe Douyon, a neurologist and author of “7 Things You Should Be Doing to Minimize Your Risk of Dementia,” notes that the Alzheimer’s Association has reported that 50-70% of symptomatic patients in community settings are inaccurately diagnosed with Alzheimer’s disease. In specialized memory clinics, this misdiagnosis rate drops to 25-30%. 
“Having a test that provides early and accurate insights into the cause of someone’s dementia could be a massive game changer,” says Dr. Douyon. Alzheimer's disease. Neurodegeneration. Cross section of normal and Alzheimer brain, with Atrophy of ... More the cerebral cortex, Enlarged ventricles and Hippocampus. Close-up of neurons with Neurofibrillary tangles and Amyloid plaques. Vector illustrationgetty This new test follows the recent FDA approval of two medications, lecanemab and donanemab, which are highly effective in removing amyloid from the brain. Clinical trials have shown that these treatments can slow the progression of dementia. Currently, to qualify for these medications, patients must undergo expensive examinations, such as a brain amyloid PET scan or a lumbar puncture to analyze their spinal fluid. Many patients, however, do not have access to PET imaging or specialist care. “A blood test makes diagnostic procedures more accessible and benefits underserved populations,” says Dr. Haythum Tayeb, a neurologist at WMCHealth. “It also enables earlier and more personalized care planning, even before formal treatment begins. This empowers patients and their families to make informed decisions sooner,” Dr. Tayeb adds. Who Should Be Tested With Lumipulse While this blood test may improve access to care for patients from communities lacking neurology and other specialty services, it is recommended to use it only for individuals experiencing memory problems, rather than for those who are asymptomatic. “Given that there is no specific treatment indicated for asymptomatic persons, there is a risk of introducing psychological harm at this stage,” warns Dr. James Noble who is Professor of Neurology at Columbia University Irving Medical Center and author of Navigating Life With Dementia. “Healthy approaches to lifestyle will remain central in adulthood whether or not someone has a positive test, and that advice will not really change,” adds Dr. Noble. 
Living a healthy lifestyle can significantly enhance brain health, regardless of whether a person has an abnormal accumulation of amyloid in their brain. Key factors include regular exercise, following a healthy diet such as the Mediterranean diet, getting adequate sleep, engaging in social and cognitive activities. These practices are all essential for maintaining cognitive function. Additionally, taking steps to protect your hearing may help reduce the risk of developing dementia.To reduce your risk of dementia, you can do regular exercise, consume a healthy diet such as the ... More Mediterranean diet, get adequate sleep, and engage regularly in social and cognitive activities.getty Anyone experiencing memory loss should consult their medical provider for an evaluation. The provider can conduct basic cognitive testing and determine if a referral to a specialist is necessary. If the individual meets the criteria for testing, the lumipulse blood test should also be considered. Future Of Alzheimer’s Testing “Looking across the wide landscape of medicine, many other conditions benefit from early detection, diagnosis, and treatment. There is no reason to believe that Alzheimer’s disease will be any different” says Dr. Noble. Indeed, screening for diseases like colon cancer, breast cancer, and high blood pressure has significantly extended the average American lifespan. Imagine how much our lives could change if we could screen for Alzheimer’s dementia in the same way. This would be particularly useful for patients at higher risk due to age or family history. Providing earlier intervention for Alzheimer’s disease could potentially reduce amyloid buildup in the brain, help preserve memories, and allow individuals to live more independently at home, rather than in nursing homes. Another advantage of using a test like the Lumipulse blood test is the ability to inform a patient that their memory loss is not linked to Alzheimer’s disease. 
While a negative blood test does not entirely rule out an Alzheimer’s diagnosis, it does make it less probable. This could prompt the medical provider to conduct further testing to identify a more accurate cause for the patient’s memory loss. In some instances, the medical provider may conclude that the patient’s memory loss is related to normal aging. This is also important so that patients are not unnecessarily placed on medications that may not help them. It is reasonable to anticipate that additional blood-based biomarkers for diagnosing Alzheimer’s disease and other dementias will be available in the future. Perhaps one day, there will be a dementia panel blood test that can be sent off to provide early diagnosis of a wide range of dementias. Alzheimer’s blood testing is not only beneficial for individuals, but it also represents a significant advancement for research. Doctors and scientists can more easily identify individuals in the early stages of Alzheimer’s disease, which accelerates clinical trials for new medications. This increased diagnostic accuracy can enhance the effectiveness of Alzheimer’s clinical trials, as it ensures that patients enrolled have more reliable diagnoses. Consequently, new and more effective treatments could be developed and made available more quickly. The Lumipulse Alzheimer’s blood test marks a pivotal moment in our approach to this disease. While patients may still need confirmatory testing through brain imaging or spinal fluid analysis, this blood test enables the medical community to adopt a more proactive, precise, and personalized strategy for diagnosing and treating patients with dementia. This simple blood test brings us one step closer to earlier answers, better care, and renewed hope for millions of people facing the uncertainty of dementia.
  • Rick and Morty team didn’t worry about the lore ‘we owe’ in season 8 — only Rick’s baggage

    Rick and Morty remains a staggering work of chaotic creativity. Previewing a handful of episodes from season 8, which premieres Sunday, May 25 with a Matrix-themed story inspired by phone charger theft, I still had that brain-melty “How do they think of this stuff?” feeling from when the show premiered more than a decade ago. The characters aren’t all the same as they were back in 2013 (voice actors aside): Morty has an edge from being around the galactic block a few hundred times, and Rick, while still a maniac, seems to carry the weight of cloning his daughter Beth that one time. 

    But the sheer amount of wackadoo sci-fi comedy that creator Dan Harmon, showrunner Scott Marder, and their team of writers pack into each half-hour hasn’t lost the awe. This season, that includes everything from a body-horror spin on the Easter Bunny to a “spiritual sequel” (Harmon’s words) to season 3’s beloved Citadel episode “The Ricklantis Mixup.”

    So where does writing yet another season of Rick and Morty begin? And what does a new season need to accomplish at this point? Polygon talked to Harmon and Marder, who wrote seasons 8, 9, and 10 all in one go, about the tall-order task of reapproaching the Adult Swim series with so much madcap history behind them.

    Polygon: Where do you even start writing a new episode, when your show can zip in any fantastical direction, or go completely ham on its own mythology?

    Scott Marder: You might be surprised that we never start off a season with “What’s the canon we owe?” That’s the heavy lifting, and not necessarily how we want to start a season off. There are always people on staff that are hyper-aware of where we are in that central arc that’s going across the whole series, but it’s like any writers room — people are coming in with ideas they’re excited about. You can just see it on their faces. You can feel their energy and just spit it out, and people just start firing off things they’re excited about. We don’t try to have any rules or any setup. Sometimes there are seasons where we owe something from the previous season. In season 8, we didn’t, and that was luxurious.

    Dan Harmon: I always reference the Dexter season where they tried to save the revelation that a Fight Club was happening for the end, and after the first episode, all of Reddit had decoded it. I marked that moment as sort of “We are now in post-payoff TV.” As TV writers, we have to use what the audience doesn’t have, which is a TV writers room. That isn’t 10 people sitting around planning a funhouse, because they’re not going to plan as good a funhouse as a million people can plan for free by crowdsourcing. 

    But we can mix chocolate with giant machines that people can’t afford and don’t have in their kitchen. We can use resources and things to make something that’s delicious to watch. So that becomes the obligation when we sit down for seasons. We never go, “What’s going to be the big payoff? What’s going to be the big old twist? What are we going to reveal?” I think that that’s a non-starter for the modern audience. You just have to hope that the thing that ends up making headlines is a “How is it still good?” kind of thing — that’s the only narrative you can blow people’s minds with.

    Even if “lore” isn’t the genesis of a new season, Rick & Morty still exists in an interesting middle ground between episodic and serialized storytelling. Do you need the show to have one or the other when you want a season to have impact?

    Harmon: It’s less episodic than Hercules or Xena. It’s not Small Wonder or something where canon would defeat their own purpose. But it is way more episodic than Yellowjackets — I walked in on Cody [Heller, Harmon’s partner] watching season 2 of [Yellowjackets], and literally there wasn’t a single line of dialogue that made sense to me, and that was how she liked it. They were all talking about whatever happened in season 1. 

    Referencing The Pitt, I think is the new perfect example of how you can’t shake your cane at serialization. In a post-streaming marketplace, The Pitt represents a new opportunity for old showrunners, new viewers to do things you couldn’t do before, that you can now do with serialization, and issuing the time-slot-driven narrative model. Our show needs to be Doctor Who or Deep Space Nine. It comes from a tradition of, you need to be able to eat one piece of chocolate out of the box, but the characters need to, more so than a Saved by the Bell character, grow and change and have things about them that get revealed over time that don’t then get retconned.

    Marder: Ideally, the show’s evergreen, generally episodic. But we’re keeping an eye on serialized stuff, moments across each season that keep everyone engaged. I know people care about all that stuff. I think all of that combined makes for a perfect Rick and Morty season.

    How reactive is writing a new season of Rick and Morty? Does season 8 feel very 2025 to you, or is the goal timelessness?

    Harmon: The show has seen such a turbulent decade, and one of the cultural things that has happened is, TV is now always being watched by the entire planet. So people often ask “Is there anything that you’re afraid to do or can’t do?” The answer to that is “No.” But then at the same time, I don’t think the show has an edge that it needs to push, or would profit from pushing. It’s almost the opposite, in that the difficult thing is figuring out how to keep Rick from being Flanderized as a character that was a nihilist 10 years ago, where across an epoch of culture and TV, Rick was simply the guy saying, “By the way, God doesn’t exist” and having a cash register “Cha-ching!” from him saying that. 

    How do you keep House from not becoming pathetic on the 10th season of House if House has made people go, “I trust House because he’s such a crab-ass and he doesn’t care about your feelings when he diagnoses you!” I mean, you need to very delicately cultivate a House. So if you do care about the character, and value its outside perspective, it needs to be delicately changed to balance a changing ecosystem. 

    What a weird rambling answer to that question. But yeah, with Rick, it’s now like, “What if you’re kind of post-achievement? What if your nihilism isn’t going to pay the rent, as far as emotional relationships?” It’s not going to blow anyone’s mind, least of all his own. Where does that leave him? A new set of challenges. He’s still cynical, he’s still a nihilist. He’s still self-loathing, and filled with self-damage. Those things are wired into him. And yet he’s also acknowledged that other people are arbitrarily important to him. And so I guess we start there — that’s the only thing we can do to challenge ourselves. 

    Marder: I would say, just yes-anding Harmon, that’s sort of the light arc that runs through the season. Just kind of Rick living in a “retirement state.” What does he do now that this vendetta is over? He’s dealing with the family now, dealing with the Beths. That’s some of the stuff that we touch on lightly through it. 

    Which characters were you excited to see grow this season?

    Marder: I don’t think anyone had an agenda. It just kind of happened that we ended up finding a really neat Beth arc once Beth got split in two. It made her a way more intriguing character. One part of you literally gets to live the road less traveled, and this season really explores whether either of them are leading a happier life. Rick has to deal with being at the root of all that. 

    When we stumble onto something like a Jerry episode, like the Easter [one], that’s a treat, or Summer and the phone charger. She’s such an awesome character. It’s cool to see how she and Morty are evolving and becoming better at being the sidekick and handling themselves. It was cool watching her become a powerful CEO, then step back into her old life. We are very lucky that we’ve got a strong cast. 

    Are there any concepts in season 8 you’ve tried to get in the show for years and only now found a way?

    Harmon: My frustrating answer to that question is that the answer to that question is one that happens in season 9! [A thing] I’ve actually been wanting to do in television or in movies forever, and we figured out how to do it. 

    There are definitely things in every episode, but it’s hard to tell which ones. We have a shoebox of “Oh, this idea can’t be done now,” but it’s like a cow’s digestive system. Ideas for seasons just keep getting passed down.

    Marder: There are a few that are magnetic that we can’t crack, and that we kind of leave on the board, hoping that maybe a new guy will come in and see it comedically. I feel like every season, a new person will come in and see that we have “time loop” up on the board, and they’ll crack their knuckles and be like, “I’m going to break the time loop.” And then we all spend three days trying to break “time loop.” Then it goes back on the board, and we’re reminded why we don’t do time loops. 

    Harmon: That is so funny. That is the reality, and it’s funny how mythical it is. It’s like an island on a pre-Columbian map in a ship’s galley, and some new deckhand comes in going, “What’s the Galapagos?” And we’re like, “Yarr, you little piece of shit, sit down and I’ll tell you a tale!” And they’ll either be successfully warned off, or they’ll go, “I’m going to take it.”

    Marder: It’s always like, “I can’t remember why that one made it back on the board… I can’t remember why we couldn’t crack it…” And then three days later, you’re like, “I remember why we couldn’t crack it.” Now an eager young writer is seasoned and grizzled. “It was a mistake to go to the time loop.”
  • Consumer rights group: Why a 10-year ban on AI regulation will harm Americans

    This week, more than 140 civil rights and consumer protection organizations signed a letter to Congress opposing legislation that would preempt state and local laws governing artificial intelligence for the next decade.

    House Republicans last week added a broad 10-year ban on state and local AI regulations to the Budget Reconciliation Bill that’s currently being debated in the House. The bill would prevent state and local oversight without providing federal alternatives.

    This year alone, about two-thirds of US states have proposed or enacted more than 500 laws governing AI technology. If passed, the federal bill would stop those laws from being enforced.

    The nonprofit Center for Democracy & Technology joined the other organizations in signing the opposition letter, which warns that removing AI protections leaves Americans vulnerable to current and emerging AI risks.

    Travis Hall, the CDT’s director for state engagement, answered questions posed by Computerworld to help determine the impact of the House Reconciliation Bill’s moratorium on AI regulations.

    Why is regulating AI important, and what are the potential dangers it poses without oversight? AI is a tool that can be used for significant good, but it can and already has been used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report.

    These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers into a lawless and unaccountable zone, which will ultimately undermine the trust of the public in their continued development and use.

    How do you regulate something as potentially ubiquitous as AI? There are multiple levels at which AI can be regulated. The first is through the application of sectoral laws and regulations, providing specific rules or guidance for particular use cases such as health, education, or public sector use. Regulations in these spaces are often already well established but need to be refined to adapt to the introduction of AI.

    The second is that there can be general rules regarding things like transparency and accountability, which incentivize responsible behavior across the AI chain (developers, deployers, users) and can ensure that core values like privacy and security are baked in.

    Why do you think the House Republicans have proposed banning states from regulating AI for such a long period of time? Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place.

    But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause.

    It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems. 

    Can you describe some of the state statutes you believe are most important to safeguarding Americans from potential AI harms? There are a range of statutes that would be overturned, including laws that govern how state and local officials themselves procure and use these technologies.

    Red and blue states alike — including Arkansas, Kentucky, and Montana — have passed bills governing the public sector’s AI procurement and use. Several states, including Colorado, Illinois, and Utah, have consumer protection and civil rights laws governing AI or automated decision systems.

    This bill undermines states’ ability to enforce longstanding laws that protect their residents or to clarify how they should apply to these new technologies.

    Sen. Ted Cruz, R-Texas, warns that a patchwork of state AI laws causes confusion. But should a single federal rule apply equally to rural towns and tech hubs? How can we balance national standards with local needs? The blanket preemption assumes that all of these communities are best served with no governance of AI or automated decision systems — or, more cynically, that the short-term financial interests of companies that develop and deploy AI tools should take precedence over the civil rights and economic interests of ordinary people.

    While there can be a reasoned discussion about what issues need uniform rules across the country and which allow flexibility for state and local officials to set rules (an easy one would be regarding their own procurement of systems), what is being proposed is a blanket ban on state and local rules with no federal regulations in place.

    Further, we have not seen, nor are we likely to see, a significant “patchwork” of protections throughout the country. The same arguments were made in the state privacy context as well, and yet, with one exception, states have passed identical or nearly identical laws, mostly written by industry. Preempting state laws to avoid a patchwork system that’s unlikely to ever exist is simply bad policy and will cause more needless harm to consumers.

    Proponents of the state AI regulation moratorium have compared it to the Internet Tax Freedom Act — the “internet tax moratorium,” which helped the internet flourish in its early days. Why don’t you believe the same could be true for AI? There are a couple of key differences between the Internet Tax Freedom Act and the proposed moratorium. 

    First, what was being developed in the 1990s was a unified, connected, global internet. Splintering the internet into silos was (and, to be frank, still is) a real danger to the fundamental feature of the platform that allowed it to thrive. The same is not true for AI systems and models, which are a diverse set of technologies and services that are regularly customized to respond to particular use cases and needs. Having diverse sets of regulatory responsibilities is not the same threat to AI that it was to the nascent internet.

    Second, removal of potential taxation as a means of spurring commerce is wholly different from removing consumer protections. The former encourages participation by lowering prices, while the latter adds significant cost in the form of dealing with fraud, abuse, and real-world harm. 

    In short, there is a massive difference between stating that an ill-defined suite of technologies is off limits from any type of intervention at the state and local level and trying to help bolster a nascent and global platform through tax incentives.