• How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way. And I didn’t expect them to do that very quickly, but it would be profound.
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa …
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and more about assessing how well AI models can complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important work that speaks to how well AI models function in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
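    [The “patients like me” idea can be sketched in a few lines of code: represent each patient as a numeric feature vector and retrieve the most similar prior patients from a cohort. Everything below, including the feature encoding, cohort, and similarity metric, is invented for illustration; a real system would need clinically meaningful features, privacy protections, and rigorous validation.]

```python
import math

# Hypothetical "patients like me" sketch: encode each patient as a numeric
# feature vector (e.g., labs, vitals, history), then rank prior patients in a
# cohort by similarity to the current patient. All numbers are made up.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def patients_like_me(query, cohort, k=3):
    """Indices of the k cohort patients most similar to the query patient."""
    scores = [cosine_similarity(query, p) for p in cohort]
    return sorted(range(len(cohort)), key=lambda i: scores[i], reverse=True)[:k]

# Toy cohort of five patients, four features each (entirely fictional)
cohort = [
    [0.90, 0.10, 0.40, 0.70],
    [0.20, 0.80, 0.50, 0.10],
    [0.88, 0.12, 0.42, 0.65],
    [0.10, 0.90, 0.30, 0.20],
    [0.50, 0.50, 0.50, 0.50],
]
query = [0.90, 0.10, 0.40, 0.66]
print(patients_like_me(query, cohort))  # the two near-duplicate patients rank first
```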
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.  
    Here’s my conversation with Bill Gates and Sébastien Bubeck.  
    LEE: Bill, welcome.  
    BILL GATES: Thank you.  
    LEE: Seb …  
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way, and that I didn’t expect them to do that very quickly, but it would be profound.   
    And it was only about six months after I challenged them to do that that they brought an early version of GPT-4 to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. 
    But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  
    LEE: Right.   
    GATES: … that is a bit weird.   
    LEE: Yeah.  
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now, when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, that’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  
    BUBECK: Yes.   
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.  
    BUBECK: Yeah.  
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  
    I thought that it was, kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. 
    But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  
    So this was really, to me, the first moment where I saw some understanding in those models.   
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  
    BUBECK: That was before. That was like a year before.  
    LEE: Right.   
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   
    LEE: Yeah. 
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. 
    And just right there, it was shown to be possible.   
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  
    But the main purpose of this conversation isn’t to reminisce about or indulge in those early days but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?  
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
    Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught how to interact with those systems.  
    And one example that I really like is a study that recently appeared where they tried to use ChatGPT for diagnosis, and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. The tradeoffs you make are somewhat dependent on the situation you’re in. If you’re in Africa, then, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education and agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   
    LEE: Right.   
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmarks. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnoses in a constrained setting, that’s a very, very important direction, which, to my surprise, is not yet accelerating at the rate that I was hoping for.  
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient—a mythical patient—and the results of my mythical physical exam, maybe some results of some initial labs. And then I present or propose a differential diagnosis. 
    And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?  
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophantic version out there.  
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where the capital of France is located. Things like that. 
    It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now: that if you push too hard in optimization on this reward model, you will get a sycophantic model.  
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  
    BUBECK: It’s a very difficult, very difficult balance.  
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have in there.  
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? 
    What’s my relationship to them?   
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. 
    LEE: Yeah. 
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just, you know, explain your own context and it will just get it and understand everything.  
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. 
And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low.  BUBECK: Yes, yes, yes.  LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   
The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either. People will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. 
I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see it produced what you wanted. So I absolutely agree with that.  And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah.  LEE: … or an endocrinologist might not.  BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? 
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.  With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. 
He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? 
What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  Until next time. 
    Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  
BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. 
It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  
I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. 
Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  
You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. 
[LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT (opens in new tab). And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  
It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  
LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? 
So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.

But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where the capital of France is located. Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now: that if you push too hard in optimization on this reward model, you will get a sycophantic model.

So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.

LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …

BUBECK: It’s a very difficult, very difficult balance.

LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?

GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria.
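Bubeck’s point about the reward-model trap is easier to see in a toy sketch. To be clear, nothing below reflects any production system: the “reward model” here is an invented two-feature stand-in, not a learned LLM. But it shows the mechanism he describes, where a spurious preference for agreement, once optimized against, produces exactly the sycophancy Lee observed in his differential-diagnosis test.

```python
# Toy illustration of the RLHF failure mode discussed above.
# The "reward model" is a hypothetical two-feature stand-in for a
# learned preference model -- not any real system's internals.

def toy_reward_model(user_claim_correct: bool, reply: str) -> float:
    """Score a candidate reply, with one spurious learned preference."""
    reply = reply.lower()
    score = 0.0
    if "you're right" in reply:
        score += 1.0   # spurious correlation: agreement scored as "good"
    if "mistake" in reply and not user_claim_correct:
        score += 0.8   # genuine preference: catching a real error
    return score

candidates = [
    "You're right, great differential!",                   # sycophantic
    "There is a mistake in item 3 of your differential.",  # corrective
]

# RLHF-style selection: push the policy toward the highest-reward reply.
# Optimizing hard against this flawed reward model picks the sycophant,
# even though the user's claim is wrong (user_claim_correct=False).
best = max(candidates, key=lambda r: toy_reward_model(False, r))
print(best)  # prints the sycophantic reply
```

In real RLHF the reward model is itself an LLM trained on human preference comparisons, and the policy is updated with an optimization algorithm rather than a one-shot max; but the failure mode, over-optimizing a flawed proxy reward, is the same one Bubeck describes.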
So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have in there. Those two things are actually very straightforward because the additional training time is small.

I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?

Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.

LEE: Yeah.

GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.

LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.

BUBECK: Yeah. OK.
So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just, you know, explain your own context and it will just get it and understand everything.

That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.

LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?

BUBECK: Yeah, absolutely. I think there is value in the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.

LEE: So we have about three hours of stuff to talk about, but our time is actually running low.

BUBECK: Yes, yes, yes.

LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?
GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We’ve focused mostly on delivery. I think it’s very much within the realm of possibility that AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. Those are, you know, testable-output-type jobs but still with very high value, and I can see, you know, some replacement in those areas before the doctor.

The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years … either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.

And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.

LEE: Is there a useful comparison, say, between doctors and, say, computer programmers, or doctors and, I don’t know, lawyers?

GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, is one where the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.

LEE: Yeah.
By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something, but will be so complex that no human mathematician can understand them. I expect that to happen.

I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in the wet lab, we see, Oh yeah, this actually works, but no one can understand why.

BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.

And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.

LEE: Yeah.

BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect would happen so quickly, and it’s due to those reasoning models.
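The kind of machine-checkable artifact Lee describes already exists at small scale: a Lean proof is accepted only if the proof checker’s kernel can verify every step, so its validity doesn’t depend on any human reading it. A deliberately trivial example, written in Lean 4 and using the standard-library lemma `Nat.add_comm`:

```lean
-- A machine-checkable proof: Lean's kernel verifies each step, so the
-- proof is valid whether or not any human can follow it. Complex
-- AI-generated proofs would be checked the same way.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Lee’s prediction is that future AI-generated proofs in this form could be thousands of steps long, still trivially verifiable by the kernel yet opaque to human mathematicians.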
Now, on the delivery side, I would add something more to it, for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you, when they are confronted with a really new, novel situation, whether they will work or not. Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system out in the wild without human supervision.

LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …

BUBECK: Yeah.

LEE: … or an endocrinologist might not.

BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.

LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?

BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.
And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …

LEE: Will AI prescribe your medicines? Write your prescriptions?

BUBECK: I think yes. I think yes.

LEE: OK. Bill?

GATES: Well, I think in the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and on the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health, with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned, because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.

You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.

LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.
I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.

[TRANSITION MUSIC]

GATES: Yeah. Thanks, you guys.

BUBECK: Thank you, Peter. Thanks, Bill.

LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.

With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.

And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.

One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.
HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
  • McDonald's in Trouble as Ozempic Takes Hold

Image by Getty / Futurism

Broken ice cream machines aren't the only thing bedeviling stalwart fast food chain McDonald's. Financial services firm Redburn Atlantic put the company's stock in the bear category, coinciding with a slumpy week in which it lost about three percent of its value — because analysts are betting that GLP-1 agonist weight loss drugs like Ozempic are going to disrupt the fast food business model, CBS News reports.

The eyebrow-raising conclusion comes as the analysts reason that people with lower incomes who go on the drugs will tend to shun food outside the home. Meanwhile, people at a higher income level who take Ozempic and similar drugs go back to their food spending habits after a year or so.

"Behaviour changes extend beyond the individual user — reshaping group dining, influencing household routines and softening habitual demand," wrote the analysts, as reported by CBS. "A 1 percent drag today could easily build to 10 percent or more over time, particularly for brands skewed toward lower income consumers or group occasions."

This could have a huge impact on the bottom line of fast food chains like McDonald's, which could stand to lose as much as $482 million annually as they see the disappearance of 28 million visits from formerly hungry customers.

This is all complete speculation at this point, because only about six percent of American adults are currently taking these weight loss medications. And they're prohibitively expensive, with prices starting at around $900 per month, meaning that extremely few poor people are currently able to afford them.

But there's a movement by some policymakers to lower the price of the drugs, which have been proven to not just help people lose weight but also come with a rash of benefits, from preventing certain cancers to treating addictions.

So if lawmakers force a reduction in price in the future, expect fast food chains like McDonald's to be left holding the bag. And maybe that's a good thing, because the kind of fried foods McDonald's traffics in are just plain bad for your health.

More on Ozempic: Doctors Concerned by Massive Uptick in Teens Taking Ozempic
  • Indie App Spotlight: ‘Pill Buddy’ is a really fun way to keep track of your medicines

    Welcome to Indie App Spotlight. This is a weekly 9to5Mac series where we showcase the latest apps in the indie app world. If you’re a developer and would like your app featured, get in contact.

Have you ever wanted a medicine tracker that keeps you motivated, is interactive, and fun to use? Pill Buddy is the answer. Designed by a former Duolingo product manager, Pill Buddy brings that same sense of engagement to yet another pesky chore: taking your medicine.

    Top features
    Pill Buddy has all of the basic features you’d expect in a medicine tracking app. You can log your medicines, receive reminders, and keep track of everything from convenient home screen widgets. Pill Buddy actually takes reminders a step further, and has an option to give you an actual phone call when you need it.
Beyond the basics, Pill Buddy has a number of features to keep you hooked. For one, you have a personal mascot in the app. When you keep on track with your doses, you earn stars, which can be used to buy items to personalize your mascot. When you miss a dose (or continually do so), your mascot will look sad.
    If you stay on top of things though, you’ll build a streak – all while continuing to earn stars for your mascot.
It’s meant to feel personal, motivating, and fun. Pill Buddy also lets you customize your schedule for each medicine while you’re setting it up, so the app adapts to your needs.
    Pill Buddy is available for free on the App Store for iPhones running iOS 18.1 or later. It’s also available on macOS and visionOS as an iOS app. The app has no ads.
The developer, Kai, left his full-time job at Duolingo to pursue indie development as a full-time gig – so if this app is something you’ve been looking for, give it a go! You can also check out the app’s website here.

  • We can reshore American manufacturing

    In my last Fast Company column, I shared my reasons for manufacturing my electric trucks in the U.S. I’m not alone. While near-shoring to North America has been underway for several years, the current tariff shifts and shipping complexities make U.S. manufacturing a higher priority still.

    However, there were 292,825 factories in the U.S. as of 2021. Of those, 846 employ 1,000 people or more. Some of these are my engineering firm’s clients, giving us a front row seat to the complexity of retrofitting an existing factory to full U.S. manufacturing. While building a new factory is expensive and lengthy, these companies’ tasks are more difficult still.

    There are good reasons for making the shift as quickly as possible. Moving to most or fully U.S. manufacturing brings higher visibility, faster response time, and higher resilience to supply chain disruption, as well as greater protection from tariff shifts and geopolitical change.

    But if you’re early in the process, here’s my advice for your transition:

    Determine a priority ranking for the refining and raw materials you shift to in-country and North American sourcing. Give highest ranking to categories including defense, high-value items (such as steel, aluminum, and rare minerals), and consumer safety items (such as pharmaceutical components).

    Be more strategic about the offshore suppliers you continue using for non-advanced manufacturing, prioritizing closer and more geographically favorable locations for production and shipment, such as Mexico and Argentina.

    Utilize government-backed capital, where possible, for extracting/mining minerals and metals such as lithium, red mud, magnesium, etc.

    Beyond the high-ranking product categories, move to domestic suppliers for primary materials such as steel, aluminum, cement, and plastics. Likewise, reduce offshoring of technical staff as well as raw materials, where possible.

    Use all means possible to become power independent through solar production, micro-grids, and nuclear power production.

    Consider creating a 4-year completion bonus for military vets. Hire vets wherever possible, as they make great workers and entrepreneurs.

    Likewise, we can press for future policy changes that best support Made in America manufacturing, as follows:

    Encourage shipbuilding in the U.S., as well as the creation of new means of automated freight transit.

    Work toward transforming government-funded R&D to include more attractive loans, rebates, and grants, as well as programs granting tax-free status to intellectual property during commercialization, to incentivize and support organizations making the shift.

    Consider energy rebates to U.S. manufacturers and distributors to make American manufacturing more cost-effective and viable.  

    Create policies to include the cost of offshore staff in tariff calculations. Expand trade relationships with Caribbean nations for products such as sugar, avocados, bananas, etc.

    Avoid or even ban foreign ownership of the food supply chain.

    Create fair competition for government contracting.

    Make health supplements and homeopathic medicines tax deductible, to promote a healthy workforce.

    While it may not be readily evident, these policy changes are related to successful reshoring. In all, we need larger scale, lower costs, and more automated and simplified mechanisms for product manufacturing. These issues, in my experience, are just as important as the raw materials we require. We need increased support for niche manufacturing. In my opinion, we also need deregulation and increased access to land (particularly in the West, where the federal government owns great quantities of the available land, choking supply).

    I believe we need better education, self-reliance, health, and incentive structures to get the capital, entrepreneurs, and workers for Made in America manufacturing. Who’s with me?

    Matthew Chang is the founding partner of Chang Robotics.
  • 400 Women Are Suing Pfizer Over Birth Control Shot That Allegedly Gave Them Brain Tumors

    Jun 1, 10:00 AM EDT / by Noor Al-Sibai

    Recent research has linked Pfizer's widely used Depo-Provera birth control shot to a massively increased risk of developing brain tumors, and hundreds of women are suing the pharmaceutical giant over it.

    According to a press release filed on behalf of the roughly 400 plaintiffs in the class action suit, the lawsuit claims that Pfizer and other companies that made generic versions of the injectable contraceptive knew of the link between the shot and the dangerous tumors but didn't properly warn users.

    The suit follows a study published by the British Medical Journal last year that found that people who took the progestin-based shot for a year or more were up to 5.6 times more likely to develop meningioma, a slow-growing brain tumor that forms, per the Cleveland Clinic, on the meninges, the layers of tissue that cover the brain and spinal cord.

    Though Pfizer attached warning labels about meningioma to Depo-Provera sold in Canada in 2015, and in the UK, Europe, and South Africa after the 2024 study was published, no such label was deployed in the United States, a failure which, according to the lawsuit, is "inconsistent [with] global safety standards."

    In an interview with the website DrugWatch, one of the suit's plaintiffs, identified by the initials TC, said that she had been "told how great Depo-Provera was" and decided to start it after an unplanned pregnancy that occurred while she was taking the since-discontinued birth control pill Ortho Tri-Cyclen Lo.

    "I thought it would be more reliable and convenient since I wouldn’t have to take it daily," TC told the site, referencing the four annual injections Depo-Provera requires. "I had no idea it would lead to such serious health problems."

    After being on the contraceptive shot for three years, and experiencing intense headaches, months-long uterine bleeding, and weight gain, the woman finally consulted her doctor and was diagnosed with meningioma. She has since been undergoing treatment and experienced some relief, but even that experience has been "physically and emotionally draining" because she has to get regular MRIs to monitor the tumor, which likely isn't fatal but still greatly affects her quality of life.

    "It’s a constant worry that the tumor might grow," TC said, "and the appointments feel never-ending."

    That fear was echoed by others who spoke to the Daily Mail about their meningioma diagnoses after taking Depo-Provera. Unlike TC, Andrea Faulks of Alabama hadn't been on the shots for years when she learned of her brain tumors, which caused her years of anguish.

    Faulks told the British outlet that she'd begun taking the medication back in 1993, the year after it was approved by the FDA in the United States. She stopped taking it only a few years later but spent decades having splitting headaches and experiencing dizziness and tremors. After being dismissed by no fewer than six doctors, she finally got an MRI last summer and learned that she had a brain tumor, and is now undergoing radiation to shrink it after all this time.

    "I know this is something I'm going to have to live with for the rest of my life, as long as I live," Faulks told the Daily Mail.

    Currently, the class action case against Pfizer on behalf of women like Faulks and TC is in its earliest stages as attorneys representing those hundreds of women with brain tumors start working to make them whole. Even if they receive adequate payouts, however, that money won't take away their suffering or give them back the years of their lives lost to tumors they should have been warned about.
  • SpaceX Is Reportedly Giving Elon Musk Advance Warning of Drug Tests

    Generally speaking, drug testing in the workplace is supposed to be conducted at random intervals, but according to insider sources, that's not the case for the sometimes-world's-richest man.

    A New York Times exposé about Elon Musk's fear and loathing on the campaign trail found that the billionaire not only has been on boatloads of risky and illegal drugs during his turn into hard-right politics, but was also being tipped off about when he'd be tested for them.

    As we've long known, SpaceX's federal contractor status requires that all its employees, including its mercurial CEO, pass drug tests. Given Musk's admitted penchant for mind-altering substances, and for ketamine in particular, his ability to pass those tests has long been a concern. If the NYT's sources are to be believed, we may now know how the 53-year-old keeps passing: he's been warned in advance when the "random" tests are going to occur, and has been able to plan accordingly.

    As those same sources allege, Musk's substance use increased significantly as he helped propel Donald Trump to the White House for a second time. He purportedly told people that his bladder had been affected by his frequent ketamine use, and he had been taking ecstasy and psilocybin mushrooms too. The multi-hyphenate businessman and politico also carried around a daily medication box with at least 20 pills in it, including ones with markings that resemble the ADHD drug Adderall, according to people who saw photos of it and relayed it to the NYT.

    When it comes to stimulants like Adderall and anything else in Musk's daily pill box (which, despite how the article makes it sound, is not that abnormal a thing for a man in his 50s to be carrying around), there's a good chance that the billionaire has prescriptions that could excuse at least some abuse. He has also claimed that he was prescribed ketamine for depression, though, to be fair, taking so much that it makes it hard to pee would suggest he's far surpassed his recommended dosage.

    As Futurism has noted before, Musk's drugs of choice described here are not often screened for on standard drug panels. Though we don't know how in-depth federal drug tests are, standard tests primarily screen for cocaine, cannabis, amphetamines, opiates, and PCP, though some include ecstasy/MDMA as well. Testing for ketamine is, on the other hand, pretty rare.

    If Musk is being tipped off about his drug tests, and is either flushing his system or taking a sober underling's urine or hair, none of that would matter. But given that the worst of his purported substance abuse revolves around ketamine, there's always a chance that he's in a recurring K-hole and getting off scot-free, unlike his employees, who are held to a much higher standard.
    #spacex #reportedly #giving #elon #musk
    SpaceX Is Reportedly Giving Elon Musk Advance Warning of Drug Tests
    Image by Jim Watson / AFP via Getty / FuturismRx/MedicinesGenerally speaking, drug testing in the workplace is supposed to be conductd at random intervals — but according to insider sources, that's not the case for the sometimes-world's richest man.A New York Times exposé about Elon Musk's fear and loathing on the campaign trail found that the billionaire not only has been on boatloads of risky and illegal drugs during his turn into hard-right politics, but was also being tipped off about when he'd be tested for them.As we've long known, SpaceX's federal contractor status requires that all its employees — including its mercurial CEO — pass drug tests. Given Musk's admitted penchant for mind-altering substances, and for ketamine in particular, his ability to pass those tests has long been a concern.If the NYT's sources are to be believed, we may now know how the 53-year-old keeps passing: because he's been warned in advance when the "random" tests are going to occur, and been able to plan accordingly.As those same sources allege, Musk's substance use increased significantly as he helped propel Donald Trump to the White House for a second time. He purportedly told people that his bladder had been affected by his frequent ketamine use, and had been taking ecstasy and psilocybin mushrooms too.The multi-hyphenate businessman and politico also carried around a daily medication box with at least 20 pills in it — including ones with markings that resemble the ADHD drug Adderall, according to people who saw photos of it and regaled it back to the NYT. When it comes to stimulants like Adderall and anything else in Musk's daily pill box — which, despite how the article makes it sound, is not that abnormal a thing for a man in his 50s to be carrying around — there's a good chance that the billionaire has prescriptions that could excuse at least some abuse. 
He also has claimed that he was prescribed ketamine for depression, though to be fair, taking so much that it makes it hard to pee would suggest he's far surpassed his recommended dosage.As Futurism has noted before, Musk's drugs of choice described here are not often screened for on standard drug panels. Though we don't know how in-depth federal drug tests are, standard tests primarily screen for cocaine, cannabis, amphetamines, opiates, and PCP, though some include ecstasy/MDMA as well. Testing for ketamine is, on the other hand, pretty rare.If Musk is being tipped off about his drug tests — and is either flushing his system or taking a sober underling's urine or hair — none of that would matter. But given that the worst of his purported substance abuse revolves around ketamine, there's always a chance that he's in a recurring K-hole and getting off scot-free, unlike his employees, who are held to a much higher standard.More on Musk's drug use: Ex-FBI Agent: Elon Musk's Drug Habit Made Him an Easy Target for Russian SpiesShare This Article #spacex #reportedly #giving #elon #musk
    FUTURISM.COM
    SpaceX Is Reportedly Giving Elon Musk Advance Warning of Drug Tests
    Image by Jim Watson / AFP via Getty / FuturismRx/MedicinesGenerally speaking, drug testing in the workplace is supposed to be conductd at random intervals — but according to insider sources, that's not the case for the sometimes-world's richest man.A New York Times exposé about Elon Musk's fear and loathing on the campaign trail found that the billionaire not only has been on boatloads of risky and illegal drugs during his turn into hard-right politics, but was also being tipped off about when he'd be tested for them.As we've long known, SpaceX's federal contractor status requires that all its employees — including its mercurial CEO — pass drug tests. Given Musk's admitted penchant for mind-altering substances, and for ketamine in particular, his ability to pass those tests has long been a concern.If the NYT's sources are to be believed, we may now know how the 53-year-old keeps passing: because he's been warned in advance when the "random" tests are going to occur, and been able to plan accordingly.(Though those sources didn't get into it, anyone who's ever had to pass a drug test themselves knows that there are typicaly two options: drink so much water that you pee all the drugs out of your system, or get urine or hair from someone else and pass it off as your own.)As those same sources allege, Musk's substance use increased significantly as he helped propel Donald Trump to the White House for a second time. He purportedly told people that his bladder had been affected by his frequent ketamine use, and had been taking ecstasy and psilocybin mushrooms too.The multi-hyphenate businessman and politico also carried around a daily medication box with at least 20 pills in it — including ones with markings that resemble the ADHD drug Adderall, according to people who saw photos of it and regaled it back to the NYT. 
(He's also been linked to cocaine and a cornucopia of other substances.)When it comes to stimulants like Adderall and anything else in Musk's daily pill box — which, despite how the article makes it sound, is not that abnormal a thing for a man in his 50s to be carrying around — there's a good chance that the billionaire has prescriptions that could excuse at least some abuse. He also has claimed that he was prescribed ketamine for depression, though to be fair, taking so much that it makes it hard to pee would suggest he's far surpassed his recommended dosage.As Futurism has noted before, Musk's drugs of choice described here are not often screened for on standard drug panels. Though we don't know how in-depth federal drug tests are, standard tests primarily screen for cocaine, cannabis, amphetamines, opiates, and PCP, though some include ecstasy/MDMA as well. Testing for ketamine is, on the other hand, pretty rare.If Musk is being tipped off about his drug tests — and is either flushing his system or taking a sober underling's urine or hair — none of that would matter. But given that the worst of his purported substance abuse revolves around ketamine, there's always a chance that he's in a recurring K-hole and getting off scot-free, unlike his employees, who are held to a much higher standard.More on Musk's drug use: Ex-FBI Agent: Elon Musk's Drug Habit Made Him an Easy Target for Russian SpiesShare This Article
  • Taking common medicines might matter for cancer treatment

    Nature, Published online: 27 May 2025; doi:10.1038/d41586-025-01642-7
  • Onimusha 2: Samurai's Destiny Remastered |OT| Reclaim Your Destiny

    Lucia
    Member

    Oct 18, 2021

    2,437

    Argentina

    Developer: Capcom, NeoBards Entertainment
    Publisher: Capcom
    Release date: May 23, 2025
    Platform: PlayStation 4, Xbox One, Nintendo Switch, PC
    Genre: Action-adventure
    Price: €29.99, £24.99
    Store links:
    System Requirements​

    Minimum

    OS: Windows 10, Windows 11
    Processor: Intel Core i3-8350K, AMD Ryzen 3 3200G
    Memory: 8 GB
    Graphics card: NVIDIA® GeForce® GTX 960 or AMD Radeon™ RX 560
    DirectX: 12
    Hard drive space: 25 GB

    Recommended

    OS: Windows 10, Windows 11
    Processor: Intel Core i3-8350K, AMD Ryzen 3 3200G
    Memory: 16 GB
    Graphics card: NVIDIA® GeForce® GTX 1060 or AMD Radeon™ RX 570
    DirectX: 12
    Hard drive space: 25 GB


    Troubleshooting guide & Issue reporting:

    Onimusha 2: Samurai's Destiny :: Steam Community

    steamcommunity.com

    About the Game​

    Onimusha 2: Samurai's Destiny was originally released on the PlayStation 2. Although it's a sequel to Onimusha: Warlords, the game features a completely new protagonist and supporting cast, and it can be enjoyed without prior experience of the first game. The game improves on various aspects of the original Onimusha: Warlords, increasing the action and replay value thanks to 4 additional playable characters and a branching story. The remaster updates the game to HD and brings various quality-of-life changes and extra features.

    Story & Cast

    The game tells the story of Jubei Yagyu and his journey of revenge against Oda Nobunaga and his demonic Genma army for the massacre of his clan. During his journey, Jubei will meet and cross paths with the mysterious woman Oyu, the young ninja Kotaro, the master spear wielder Ekei and the gunslinger Magoichi. They each have their own aims and their own connections, which will lead them to fight each other, and sometimes fight together. Experience 100 different scenarios across the game's branching story.

    Completionist note: it's impossible to see all scenarios in one playthrough; for more details click here.

    Gameplay

    Like in the original Onimusha: Warlords, the game features a mix of exploration and combat, but now to a greater degree. The player starts out fighting with a normal sword, but as they progress through the story they will collect an assortment of short- and long-range weapons, from diverse element-based weapons to bows and firearms.

    Defeated Genma monsters provide the player with demon souls that they can absorb for various benefits depending on their color. Yellow souls restore your health, blue souls restore magic power, red souls can be used to upgrade your gear, and the rare purple souls unleash your 'Onimusha' transformation after you absorb five of them.
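    For those who find rules easier to read as code, the soul-absorption behavior described above can be modeled as a small sketch. This is purely illustrative (the class name, starting values, and amounts are invented, not actual game data); only the color-to-benefit mapping and the five-purple-souls rule come from the description above.

    ```python
    # Hypothetical sketch of the soul rules described above; not game code.
    PURPLE_SOULS_FOR_TRANSFORM = 5  # "after you absorb five of them"

    class Jubei:
        def __init__(self):
            self.health = 50        # starting values are made up
            self.magic = 20
            self.red_souls = 0      # red souls act as upgrade currency
            self.purple_souls = 0
            self.transformed = False

        def absorb(self, color, amount=1):
            if color == "yellow":
                self.health += amount      # yellow restores health
            elif color == "blue":
                self.magic += amount       # blue restores magic power
            elif color == "red":
                self.red_souls += amount   # red is banked for gear upgrades
            elif color == "purple":
                self.purple_souls += amount
                if self.purple_souls >= PURPLE_SOULS_FOR_TRANSFORM:
                    self.transformed = True    # 'Onimusha' transformation
                    self.purple_souls = 0

    j = Jubei()
    for _ in range(5):
        j.absorb("purple")
    assert j.transformed
    ```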

    The player can build and deepen Jubei's relationship with each of his allies by performing certain actions and exchanging gifts they like. This will unlock special scenarios and eventually give you control of those characters during certain points of the story.

    For more details about the Gift Exchange system, click here.

    New Features & Updates​
    New "HELL" Mode: an extremely difficult mode where you die in a single hit.
    Gallery: the gallery from the original now supports higher resolution & zoom functions.

    Over 100 new special artworks have been added.
    You can listen to all 43 songs of the original soundtrack.

    All assets updated to high definition.
    Switch between 16:9 and 4:3 aspect ratio on the fly during gameplay.
    Easy Mode is now available by default.
    All cutscenes can now be skipped from the start.
    Mini-games available from the start.
    Alternative costumes available from the start.
    Added auto-save feature.
    Weapons can be swapped without having to open the menu.
    Bonuses

    You can get a special outfit for Jubei if you have save data from Onimusha: Warlords. To switch Jubei's outfit, select Special Features → Jubei's Outfit from the title-screen menu and choose between Normal and Special. This only alters the appearance; your stats will match the armour you equip in-game.

    By pre-ordering the game you get the Onimusha 2: Orchestra Album Selection Pack. It includes five tracks selected from the Onimusha 2 Orchestra Album Taro Iwashiro Selection. Select Special Features → Gallery → Original Soundtrack from the title-screen menu to access these tracks. This product is also available as part of the Onimusha bundle.

    You also get a pack of items that contains 3 herbs, 2 medicines, 1 secret medicine, 2 special magic liquids, 1 perfect medicine, 1 talisman and 10,000 red souls. The contents will appear after meeting Takajo in the early game. If you have already met Takajo, the contents will appear when you select "Load Game". While you can only get this item pack once, you can also obtain the items in-game. The content listed in the DLC may become available separately at a later date.

    Bundle

    You can purchase Onimusha: Warlords and Onimusha 2: Samurai's Destiny together. Bundle links:
    Media​

    Announcement Trailer​
    Pre-order Announcement Trailer​



    Message from the Director​
    Gameplay with the Director​

     


    Threadmarks Gift Exchange guide

    Lucia
    Member

    Oct 18, 2021

    2,437

    Argentina

    Gift Exchange​

    A core gameplay mechanic introduced in Onimusha 2 is the Gift Exchange.

    Alongside the player's standard item inventory, there is a separate inventory exclusively for gift items, which can be given to Ekei, Magoichi, Kotaro and Oyu. A total of 125 gifts can be found throughout the game, and each will elicit a different response depending on who it is given to.

    All 125 Gift locations.


    As said above, gifts will elicit a different response from each character depending on how much they value them; for example, the Vodka gift has an A-rank value for Ekei but a B-rank value for Magoichi. As detailed in the video above, each character has a pool of unique gifts/items per rank that they can give you at random in exchange for a gift of that rank. The video and doc below detail what rank value each gift has for each character.

    Doc with each gift rating value:

    View:  
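    If it helps to see the structure, the per-character rank table described above is essentially a two-level lookup. This is a hypothetical sketch: only the Vodka A/B example comes from the guide; the default rank and the helper name are invented.

    ```python
    # Hypothetical model of the gift-rank lookup described above.
    # Only the Vodka entry is from the guide; everything else is illustrative.
    GIFT_RANKS = {
        "Vodka": {"Ekei": "A", "Magoichi": "B"},
    }

    def rank_for(gift, character, default="C"):
        """Return the rank value a character assigns to a gift.

        Unknown gift/character pairs fall back to an assumed default rank.
        """
        return GIFT_RANKS.get(gift, {}).get(character, default)

    assert rank_for("Vodka", "Ekei") == "A"
    assert rank_for("Vodka", "Magoichi") == "B"
    ```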



    Threadmarks Scenario Route guide

    Lucia
    Member

    Oct 18, 2021

    2,437

    Argentina

    Scenario Route

    From the Onimusha wiki:

    While there are many scenarios that are guaranteed to occur throughout the game, many other optional scenarios can be triggered by raising the friendship of one or more characters by repeatedly giving them gifts that elicit positive reactions. These optional scenarios can provide additional character development of a certain sub-character, reward the player with additional items, and can unlock playable sections for those characters, though the playable section for Oyu is mandatory regardless of her friendship. While some optional scenarios can occur on their own, others are a part of a split route, with only one out of multiple scenarios being possible to trigger per-playthrough.

    However, there are restrictions to this system. Due to the split scenario routes, it is not possible to trigger all scenarios in a single playthrough, as there are multiple instances of split scenario routes that can only trigger a single scenario, and in one case it is even possible for none of them to trigger. Another restriction is that even if the friendship level of all four sub-characters is at the minimum level required to trigger their optional scenarios, only one sub-character can have most of their optional scenarios triggered per playthrough, depending on which sub-character has the highest friendship. The only exceptions are each sub-character's playable sections and some scenarios that also involve whoever has the highest friendship. As a result of these restrictions, at least four separate playthroughs are required to trigger every scenario in the game.
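    The highest-friendship rule above can be condensed into a one-line sketch (hypothetical, not game code; the function name and levels are invented): whichever sub-character has the highest friendship is the one whose optional scenarios trigger that playthrough.

    ```python
    # Hypothetical sketch of the highest-friendship rule described above.
    def scenario_owner(friendship):
        """friendship: dict of sub-character -> friendship level.

        Returns the sub-character whose optional scenarios would trigger.
        (Ties are resolved arbitrarily here; the game's tiebreak is unknown.)
        """
        return max(friendship, key=friendship.get)

    levels = {"Ekei": 7, "Magoichi": 3, "Kotaro": 5, "Oyu": 6}
    assert scenario_owner(levels) == "Ekei"
    ```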

    --------------------------------

    Note: the Scenario Route keeps track of all the scenarios you triggered in previous playthroughs, so you can focus on just the ones you missed, but you still have to meet their requirements to trigger them in subsequent plays.

    The following guides contain spoilers; it's recommended to read them after your first playthrough, or if you're a returning player.
     


    shadowman16
    Member

    Oct 25, 2017

    41,569

    Magoichi you swine!

    Very excited to replay this one, it was always one of my absolute favourites in the series... Half because of Gogandantess, and half because I felt legit robbed when you never got to defeat Nobunaga in Oni 1. 

    KyouG
    Member

    Oct 26, 2017

    642

    I loved Onimusha HD, and I have been greatly looking forward to playing this. Will make use of the gift guide on my second playthrough, lol.
     

    Tengrave
    Avenger

    Oct 26, 2017

    1,108

    Great OT! The best Onimusha.
     

    ramenline
    Member

    Jan 9, 2019

    1,673

    Started playing the PS2 version yesterday, I played Oni 1 a few months ago and enjoyed it overall. Nice and breezy with great backgrounds.

    Will probably save 3 and 4 for when we're closer to Way of the Sword dropping 

    Aeana
    Member

    Oct 25, 2017

    7,573

    I love this game so much. Super excited.
     

    Sumio Mondo
    Member

    Oct 25, 2017

    10,746

    United Kingdom

    A PS2 classic returns!

    Can't wait to play it this weekend. 

    Western Yokai
    Member

    Feb 14, 2025

    172

    This will not get a physical release, right?
     

    RayCharlizard
    Member

    Nov 2, 2017

    4,475

    Western Yokai said:

    This will not get a physical release, right?


    There isn't one announced but who knows if this gets a Limited Run or something down the line.
     

    AlexDS1996
    Member

    Jul 14, 2022

    3,958

    Excellent thread! Looking forward to playing it at midnight.
     

    demi
    Member

    Oct 27, 2017

    16,574

    My name is Goooogandantessss
     

    Sumio Mondo
    Member

    Oct 25, 2017

    10,746

    United Kingdom

    Tengrave said:

    Great OT! The best Onimusha.


     

    Chackan
    Member

    Oct 31, 2017

    5,451

    "Juuuubeeeeeeiiii"

    Fucking finally. Played Onimusha 1 HD when it came out on the Switch, and have been waiting since then for this one!

    Hope they don't take another 5 or 6 years with Onimusha 3... 

    ResinPeasant93
    Member

    Apr 24, 2024

    2,489

    My favorite Onimusha. Still have my PS2 copy
     

    Koivusilta
    Member

    Oct 30, 2017

    629

    Finland

    The best Onimusha and one of my overall favorite PS2 games, so glad it's finally getting a re-release! Can't wait to dig in tomorrow after work. Completed Clair Obscur just in time, too!

    Looking at the Motohide Eshiro gameplay video, I'm glad to see they changed the Onimusha transformation so that it's now manually activated like in Onimusha 3, so you don't waste your transformation if you accidentally collect the fifth purple orb. Attack charging is also a bit different now, since the game originally used the pressure sensitive shoulder buttons for it.

    PS. I really wish they go back and add Genma features into the Warlords remaster, even if it was paid DLC. 

    G_Shumi
    One Winged Slayer
    Member

    Oct 26, 2017

    7,650

    Cleveland, OH

    Great OP!

    I recently played Onimusha 2 & 3 on PS2 last year, so I'll probably wait for a sale.

    But I do have one piece of sage advice for Onimusha 2: rotate the analog sticks in order to open the heavy door! If you get far enough in the game, you'll know what I mean. 

    Tagovailoa
    Member

    Feb 5, 2023

    1,586

    Love this game!

    Just beat Oni 1 remastered in one sitting yesterday while home sick from work. Looking forward to getting to this sometime this weekend.

    I have beaten this game 5+ times and never got 100% scenario completion. 

    RiZ IV
    Member

    Oct 27, 2017

    933

    Wow, I didn't realize this was coming out tomorrow. Onimusha 2 was one of my favorite PS2 games. Will definitely pick this up.
     

    GwyndolinCinder
    Member

    Oct 26, 2017

    5,703

    JUBEIIIIIIIIIIII
     

    coldsagging
    AVALANCHE
    Member

    Oct 27, 2017

    8,077

    Tengrave said:

    Great OT! The best Onimusha.


    Facts.
     

    The Silver
    Member

    Oct 28, 2017

    11,584

    Haven't replayed this in so long. Hope they bring back and expand on the structure of Oni 2 in the new one; it has a lot of potential.
     

    Annie85x
    Member

    Mar 12, 2020

    2,949

    Oni 2 was my fav. Super excited to jump back in over the weekend
     

    Timodus
    Member

    Oct 27, 2017

    383

    My first and favorite Onimusha. I'm glad I can finally play it with the Japanese voices.
     


    Lucia
    Member

    Oct 18, 2021

    2,437

    Argentina



    @OnimushaGame said:

    Onimusha 2: Samurai's Destiny launches tomorrow. Prepare to reclaim your destiny! Today, we're celebrating with this amazing piece from @hieumayart featuring our protagonist, Jubei!


     

    thetrin
    Member

    Oct 26, 2017

    10,725

    Grand Junction, CO

    Awesome game. Loved it when I played it on PS2. I am curious to see what people who are playing it with fresh eyes think of it.
     

    stn
    Member

    Oct 28, 2017

    6,414

    Definitely getting this! I started playing the OG on PS2, but the controls are so bad that I'll play this instead.
     


    Lucia
    Member

    Oct 18, 2021

    2,437

    Argentina



    @OnimushaGame said:

    The web manual for Onimusha 2: Samurai's Destiny is now live. Check it out to prepare for tomorrow's release! Access the manual here



     

    Zor
    Member

    Oct 30, 2017

    14,095

    So I was going to replay the first game before this as I own the remaster, but I just realised I own Genma Onimusha and never ever actually played it.

    Is Genma considered the best version just for people that like a more difficult experience or do its benefits/improvements range beyond that?

    Just wondering which the best version of the first is. 

    LetalisAmare
    Member

    Oct 27, 2017

    4,363

    Just started. The 16:9 is zoomed in or cropped whatever you call it. I'll stick to 4:3.
     


    Lucia
    Member

    Oct 18, 2021

    2,437

    Argentina

    Zor said:

    So I was going to replay the first game before this as I own the remaster, but I just realised I own Genma Onimusha and never ever actually played it.

    Is Genma considered the best version just for people that like a more difficult experience or do its benefits/improvements range beyond that?

    Just wondering which the best version of the first is.

    Yeah, Genma is the best version of Oni 1, and it's an overall harder game than the OG; it has one new location and 2 new bosses.
     

    Count of Monte Sawed-Off
    Member

    Oct 27, 2017

    5,057

    Best Onimusha.
     

    Zetta
    The Fallen

    Oct 25, 2017

    8,521

    Buying it just to show support and will eventually play it much later on. Hoping this sells a lot so we can get 3.
     

    Jawmuncher
    Crisis Dino
    Moderator

    Oct 25, 2017

    44,845

    Ibis Island

    Great OT, fixed the title though. No need to include the platforms in the title since they're in the OP
     

    giancarlo123x
    One Winged Slayer
    Member

    Oct 25, 2017

    28,013

    ? That's easy money.
     

    TΛPIVVΛ
    Member

    Nov 12, 2017

    4,125

    Surprised it's out!

    Just crept up on me!


    Type VII
    Member

    Oct 31, 2017

    2,980

    Downloaded on PS5 and ready to go when I get home from work this evening. It's a shame there's no physical release, but between this and Capcom Fighting Collection 2, I'll be partying like it's the early 2000s all weekend.
     

    Aske
    The Fallen

    Oct 25, 2017

    6,318

    Canadia

    Golden Evil Statue!!!!!!
     

    AlexDS1996
    Member

    Jul 14, 2022

    3,958

    I've just played a little over an hour and it's perfect. That counter attack is always satisfying. The game looks great to me and the sound is really nice too.
     

    Tagovailoa
    Member

    Feb 5, 2023

    1,586

    Aske said:

    Golden Evil Statue!!!!!!


    New players are not going to have a good time 

    Zolbrod
    Member

    Oct 27, 2017

    3,965

    Osaka, Japan

    By far the best game in the series!

    Can't wait to play it again! 

    NovumVeritas
    Member

    Oct 26, 2017

    11,143

    Berlin

    I just played a little bit docked on Switch, and it looks very oversharpened, anyone else? Is that the AI filter they used?
     

    Hystzen
    Member

    Oct 25, 2017

    2,674

    Manchester UK

    It's the best Onimusha for the first third of the game, then they ditch the hub concept and character interactions and it turns rushed and bland.
     


    Lucia
    Member

    Oct 18, 2021

    2,437

    Argentina

    I wish the 4:3 ratio also applied to cutscenes.
     

    Pez
    Member

    Oct 28, 2017

    1,422

    If this gets a physical release, I'm there. Will hold out until then.
     

    joyfoolish
    Member

    Aug 25, 2024

    197

    I was wondering if the PS4 version looks good on PS5? Is it at least 1440p?
     

    Rust
    Member

    Jan 24, 2018

    1,443

    What the heck is this stupid random mini-game?

    I think I've died more often opening a garage door than throughout the rest of the game.

    I really enjoyed the first one - samurai game ala Resident Evil? Sign me up! Whereas this one started okay, now it's turned into an incredibly linear experience.

    I'm hoping it'll change back, but I'm thinking it's entering the final act. 

    Jawmuncher
    Crisis Dino
    Moderator

    Oct 25, 2017

    44,845

    Ibis Island

    Pez said:

    If this gets a physical release, I'm there. Will hold out until then.


    No physical release is a big hit on this, especially after they did one for the 1st game.
    Not even a Japanese physical is surprising.
    Capcom was one of the stronger JP publishers still doing that at least, so it's a shame to see them seemingly ditching it. 

    Pez
    Member

    Oct 28, 2017

    1,422

    Yeah, they never did them for the DMC games on Switch either. There's a good chance this never gets a physical release. We'll see!
     
    WWW.RESETERA.COM
    Onimusha 2: Samurai's Destiny Remastered |OT| Reclaim Your Destiny
    Lucia Member Oct 18, 2021 2,437 Argentina

Developer: Capcom (original), NeoBards Entertainment (remaster)
Publisher: Capcom
Release date: May 23, 2025
Platform(s): PlayStation 4, Xbox One, Nintendo Switch, PC
Genre: Action-adventure
Price: $29.99 (US), €29.99 (EU), £24.99 (UK)
Store links:

System Requirements (PC)

Minimum
OS: Windows 10 (64-bit), Windows 11
Processor: Intel Core i3 8350K, AMD Ryzen 3 3200G
Memory: 8 GB
Graphics card: NVIDIA® GeForce® GTX 960 (4 GB VRAM) or AMD Radeon™ RX 560 (4 GB VRAM)
DirectX: 12
Hard drive space: 25 GB

Recommended
OS: Windows 10 (64-bit), Windows 11
Processor: Intel Core i3 8350K, AMD Ryzen 3 3200G
Memory: 16 GB
Graphics card: NVIDIA® GeForce® GTX 1060 (6 GB VRAM) or AMD Radeon™ RX 570 (4 GB VRAM)
DirectX: 12
Hard drive space: 25 GB
Click to expand... Click to shrink...

Troubleshooting guide & Issue reporting (Steam): Onimusha 2: Samurai's Destiny :: Steam Community steamcommunity.com

About the Game
Onimusha 2: Samurai's Destiny was originally released on the PlayStation 2. Although it's a sequel to Onimusha: Warlords, the game features a completely new protagonist and supporting cast, and can be enjoyed without prior experience of the first game. The game improves on various aspects of the original Onimusha: Warlords, increasing the action and replay value with 4 additional playable characters and a branching story. The remaster updates the game to HD and brings various quality-of-life changes and extra features.

Story & Cast
The game tells the story of Jubei Yagyu and his journey of revenge against Oda Nobunaga and his demonic Genma army for the massacre of his clan. During his journey, Jubei will cross paths with the mysterious woman Oyu, the young ninja Kotaro, the master spear wielder Ekei and the gunslinger Magoichi. Each of them has their own aims and their own connections, which will lead them to fight each other, and sometimes fight together. 
Experience 100 different scenarios across the game's branching story. Completionist note: it's impossible to see all scenarios in one playthrough; for more details, click here.

Gameplay
Like the original Onimusha: Warlords, the game features a mix of exploration and combat, now to a greater degree. The player starts out fighting with a normal sword, but as they progress through the story they will collect an assortment of short- and long-range weapons, from diverse element-based weapons to bows and firearms. Defeated Genma monsters will leave behind demon souls that the player can absorb to obtain various benefits depending on their color: yellow souls restore your health, blue souls restore magic power, red souls can be used to upgrade your gear, and the rare purple souls unleash your "Onimusha" transformation after absorbing five of them.

The player can build and deepen Jubei's relationship with each of his allies by performing certain actions and exchanging gifts they like; this will unlock special scenarios and eventually give you control of those characters during certain points of the story. For more details about the Gift Exchange system, click here.

New Features & Updates
- New "HELL" Mode: an extremely difficult mode where you die in a single hit.
- Gallery: the gallery from the original now supports higher resolution & zoom functions. Over 100 new special artworks have been added. You can listen to all 43 songs of the original soundtrack.
- All assets updated to high definition.
- Switch between 16:9 and 4:3 aspect ratio on the fly during gameplay.
- Easy Mode is now available by default.
- All cutscenes can now be skipped from the start.
- Mini-games available from the start.
- Alternative costumes available from the start.
- Added auto-save feature.
- Weapons can be swapped without having to open the menu.

Bonuses
You can get a special outfit for Jubei if you have save data from Onimusha: Warlords. 
To switch Jubei's outfit, select Special Features → Jubei's Outfit from the title-screen menu and choose between Normal and Special. This only alters the appearance; your stats will be the same as the armour you equip in-game.

By pre-ordering the game you get the Onimusha 2: Orchestra Album Selection Pack. It includes five tracks selected from the Onimusha 2 Orchestra Album Taro Iwashiro Selection. Select Special Features → Gallery → Original Soundtrack from the title-screen menu to access these tracks. This product is also available as part of the Onimusha bundle. (Acquire this bundle before July 1, 2025, 04:00 (UTC), to receive a limited-time bonus!)

You also get a pack of items that contains 3 herbs, 2 medicines, 1 secret medicine, 2 special magic liquids, 1 perfect medicine, 1 talisman and 10,000 red souls. The content will appear after meeting Takajo in the early game. If you have already met Takajo, the content will appear when you select "Load Game". While you can only get this item pack once, you can also get the items in-game. The content listed in the DLC may become available separately at a later date.

Bundle
You can purchase Onimusha: Warlords and Onimusha 2: Samurai's Destiny together. Bundle links:

Media
Announcement Trailer
Pre-order Announcement Trailer
Message from the Director
Gameplay with the Director

Last edited: Yesterday at 8:21 AM

Threadmarks Gift Exchange guide New Index

OP OP Lucia Member Oct 18, 2021 2,437 Argentina

Gift Exchange
A core gameplay mechanic introduced in Onimusha 2 is the Gift Exchange. Alongside the player's standard item inventory, there is a separate inventory exclusively for gift items that can be given to Ekei, Magoichi, Kotaro and Oyu. A total of 125 gifts can be found throughout the game, and each will elicit a different response depending on who it is given to. All 125 Gift locations (item names may differ in the remaster). 
View: https://www.youtube.com/watch?v=6BopXanIz40

As said above, gifts elicit a different response from each character depending on how much they value them; for example, the Vodka gift has an A-rank value for Ekei but a B-rank value for Magoichi. As detailed in the video above, each character has a pool of unique gifts/items per rank that they can give you at random in exchange for a gift of that rank. The video and doc below detail what rank value each gift has per character.

Doc with each gift rating value (item names may differ in the remaster): https://docs.google.com/spreadsheets/d/1kYJJ7yifduP0IcRArk-xuBqTEVJKnOadTfEdagBEu1I/edit?usp=sharing

View: https://www.youtube.com/watch?v=RiGmPPmrPAw

Last edited: Yesterday at 9:36 AM New Index

Threadmarks Scenario Route guide New Index

OP OP Lucia Member Oct 18, 2021 2,437 Argentina

Scenario Route
From the Onimusha wiki: While there are many scenarios that are guaranteed to occur throughout the game, many other optional scenarios can be triggered by raising the friendship of one or more characters by repeatedly giving them gifts that elicit positive reactions. These optional scenarios can provide additional character development for a certain sub-character, reward the player with additional items, and can unlock playable sections for those characters, though the playable section for Oyu is mandatory regardless of her friendship. While some optional scenarios can occur on their own, others are part of a split route, with only one out of multiple scenarios being possible to trigger per playthrough (e.g., three separate characters can aid Jubei in the Imasho Gold Mine, but only one can do so per playthrough). However, there are restrictions to this system. 
Due to the split scenario routes, it is not possible to trigger all scenarios in a single playthrough, as there are multiple instances of split scenario routes that can only trigger a single scenario, with it even being possible for none of them to trigger in one case. Another restriction is that even if the friendship level of all four sub-characters is at the minimum level required to trigger their optional scenarios, only one sub-character can have most of their optional scenarios triggered per playthrough, depending on which sub-character has the highest friendship. The only exceptions are each sub-character's playable sections and some scenarios that also involve whoever has the highest friendship (e.g., Chapter 7-10 only requires either Ekei or Magoichi to have high enough friendship). As a result of these restrictions, at least four separate playthroughs are required to trigger every scenario in the game.

--------------------------------

Note: the Scenario Route keeps track of all the scenarios you triggered in previous playthroughs, so you can focus on the ones you missed; you still have to meet their requirements to trigger them in subsequent playthroughs. The following guides contain spoilers; we recommend reading them after your first playthrough, or if you're a returning player.

New Index

shadowman16 Member Oct 25, 2017 41,569 Magoichi you swine! (for some reason that's been stuck in my head for decades... I love the cast for 2). Very excited to replay this one, it was always one of my absolute favourites in the series... Half because of Gorgandates (legend) and half because I felt legit robbed when you never got to defeat Nobunaga in Oni1.

KyouG Member Oct 26, 2017 642 I loved Onimusha HD, and I have been greatly looking forward to playing this. Will make use of the gift guide on my second playthrough, lol.

Tengrave Avenger Oct 26, 2017 1,108 Great OT! The best Onimusha.   
ramenline Member Jan 9, 2019 1,673 Started playing the PS2 version yesterday, I played Oni 1 a few months ago and enjoyed it overall. Nice and breezy with great backgrounds. Will probably save 3 and 4 for when we're closer to Way of the Sword dropping  Aeana Member Oct 25, 2017 7,573 I love this game so much. Super excited.   Sumio Mondo Member Oct 25, 2017 10,746 United Kingdom A PS2 classic returns! Can't wait to play it this weekend.  Western Yokai Member Feb 14, 2025 172 This will not get a physical release, right?   RayCharlizard Member Nov 2, 2017 4,475 Western Yokai said: This will not get a physical release, right? Click to expand... Click to shrink... There isn't one announced but who knows if this gets a Limited Run or something down the line.   AlexDS1996 Member Jul 14, 2022 3,958 Excellent thread! Looking forward to playing it at midnight.   demi Member Oct 27, 2017 16,574 My name is Goooogandantessss   Sumio Mondo Member Oct 25, 2017 10,746 United Kingdom Tengrave said: Great OT! The best Onimusha. Click to expand... Click to shrink...   Chackan Member Oct 31, 2017 5,451 "Juuuubeeeeeeiiii" Fucking finally. Played Onimusha 1 HD when it came out on the Switch, and have been waiting since then for this one! Hope they don't take another 5 or 6 years with Onimusha 3...  ResinPeasant93 Member Apr 24, 2024 2,489 My favorite Onimusha. Still have my PS2 copy   Koivusilta Member Oct 30, 2017 629 Finland The best Onimusha and one of my overall favorite PS2 games, so glad it's finally getting a re-release! Can't wait to dig in tomorrow after work. Completed Clair Obscur just in time, too! Looking at the Motohide Eshiro gameplay video, I'm glad to see they changed the Onimusha transformation so that it's now manually activated like in Onimusha 3, so you don't waste your transformation if you accidentally collect the fifth purple orb. Attack charging is also a bit different now, since the game originally used the pressure sensitive shoulder buttons for it. PS. 
I really wish they go back and add Genma features into the Warlords remaster, even if it was paid DLC.  G_Shumi One Winged Slayer Member Oct 26, 2017 7,650 Cleveland, OH Great OP! I recently played Onimusha 2 & 3 on PS2 last year, so I'll probably wait for a sale (or an eventual physical release please!). But I do have one sage advice for Onimusha 2: rotate the analog sticks in order to open the heavy door! If you get far enough in the game, you'll know what I mean.  Tagovailoa Member Feb 5, 2023 1,586 Love this game! Just beat Oni 1 remastered in one sitting yesterday while home sick from work. Looking forward to getting to this sometime this weekend. I have beaten this game 5+ times and never got 100% scenario completion.  RiZ IV Member Oct 27, 2017 933 Wow, I didn't realize this was coming out tomorrow. Onimusha 2 was one of my favorite PS2 games. Will definitely pick this up.   GwyndolinCinder Member Oct 26, 2017 5,703 JUBEIIIIIIIIIIII   coldsagging AVALANCHE Member Oct 27, 2017 8,077 Tengrave said: Great OT! The best Onimusha. Click to expand... Click to shrink... Facts.   The Silver Member Oct 28, 2017 11,584 Haven't replayed this in so long. Hope the bring back and expand on the structure of Oni 2 in the new one, it has a lot of potential   Annie85x Member Mar 12, 2020 2,949 Oni 2 was my fav. Super excited to jump back in over the weekend 😍   Timodus Member Oct 27, 2017 383 My first and favorite Onimusha. I'm glad I can finally play it with the Japanese voices.   OP OP Lucia Member Oct 18, 2021 2,437 Argentina https://x.com/OnimushaGame/status/1925673157190463524 @OnimushaGame said: Onimusha 2: Samurai's Destiny launches tomorrow. Prepare to reclaim your destiny! Today, we're celebrating with this amazing piece from @hieumayart featuring our protagonist, Jubei! Click to expand... Click to shrink...   thetrin Member Oct 26, 2017 10,725 Grand Junction, CO Awesome game. Loved it when I played it on PS2. 
I am curious to see what people who are playing it with fresh eyes think of it.   stn Member Oct 28, 2017 6,414 Definitely getting this! I started playing the OG on PS2, but the controls are so bad that I'll play this instead.   OP OP Lucia Member Oct 18, 2021 2,437 Argentina https://x.com/OnimushaGame/status/1925703394737467771 @OnimushaGame said: The web manual for Onimusha 2: Samurai's Destiny is now live. Check it out to prepare for tomorrow's release! Access the manual here 👇 https://manual.capcom.com/onimusha2/ Click to expand... Click to shrink...   Zor Member Oct 30, 2017 14,095 So I was going to replay the first game before this as I own the remaster, but I just realised I own Genma Onimusha and never ever actually played it. Is Genma considered the best version just for people that like a more difficult experience or do its benefits/improvements range beyond that? Just wondering which the best version of the first is.  LetalisAmare Member Oct 27, 2017 4,363 Just started. The 16:9 is zoomed in or cropped whatever you call it. I'll stick to 4:3.   OP OP Lucia Member Oct 18, 2021 2,437 Argentina Zor said: So I was going to replay the first game before this as I own the remaster, but I just realised I own Genma Onimusha and never ever actually played it. Is Genma considered the best version just for people that like a more difficult experience or do its benefits/improvements range beyond that? Just wondering which the best version of the first is. Click to expand... Click to shrink... Yeah, Genma is the best version of Oni 1 and it's an overall harder game than the OG, it has one new location, 2 new bosses (one of them is a RE-Nemesis type stalker).   Count of Monte Sawed-Off Member Oct 27, 2017 5,057 Best Onimusha.   Zetta The Fallen Oct 25, 2017 8,521 Buying it just to show support and will eventually play it much later on. Hoping this sells a lot so we can get 3.   
Jawmuncher Crisis Dino Moderator Oct 25, 2017 44,845 Ibis Island Great OT, fixed the title though. No need to include the platforms in the title since they're in the OP   giancarlo123x One Winged Slayer Member Oct 25, 2017 28,013 $30? That's easy money.   TΛPIVVΛ Member Nov 12, 2017 4,125 Surprised its out! Just crept up on me! View: https://youtu.be/D9joJuEcJAw  Type VII Member Oct 31, 2017 2,980 Downloaded on PS5 and ready to go when I get home from work this evening. It's a shame there's no physical release, but between this and Capcom Fighting Collection 2, I'll be partying like it's the early 2000s all weekend.   Aske The Fallen Oct 25, 2017 6,318 Canadia Golden Evil Statue!!!!!!   AlexDS1996 Member Jul 14, 2022 3,958 I've just played a little over an hour and it's perfect. That counter attack is always satisfying. The game looks great to me and the sound is really nice too.   Tagovailoa Member Feb 5, 2023 1,586 Aske said: Golden Evil Statue!!!!!! Click to expand... Click to shrink... New players are not going to have a good time  Zolbrod Member Oct 27, 2017 3,965 Osaka, Japan By far the best game in the series! Can't wait to play it again!  NovumVeritas Member Oct 26, 2017 11,143 Berlin I just played a little bit docked on Switch, this looks very oversharpend, any one else? Is that the use of the AI filter they used?   Hystzen Member Oct 25, 2017 2,674 Manchester UK It's best onimusha for a 1/3rd of game then they ditch the hub concept and character interactions it turns rushed and bland   OP OP Lucia Member Oct 18, 2021 2,437 Argentina I wish the 4:3 ratio also applied to cutscenes.   Pez Member Oct 28, 2017 1,422 If this gets a physical release, I'm there. Will hold out until then.   joyfoolish Member Aug 25, 2024 197 I was wondering if the PS4 version looks good on PS5? Is it at least 1440p?   Rust Member Jan 24, 2018 1,443 What the heck is this stupid random mini-game? 
I think I've died more often opening a garage door than throughout the rest of the game. I really enjoyed the first one - samurai game ala Resident Evil? Sign me up! Whereas this one started okay, now it's turned into an incredibly linear experience. I'm hoping it'll change back, but I'm thinking it's entering the final act.  Jawmuncher Crisis Dino Moderator Oct 25, 2017 44,845 Ibis Island Pez said: If this gets a physical release, I'm there. Will hold out until then. Click to expand... Click to shrink... No Physical release is a big hit on this. Especially after they did the 1st game. Not even a Japanese Physical is surprising. Capcom was one of the stronger JP publishers still doing that at least, so it's a shame to see them seemingly ditching it.  Pez Member Oct 28, 2017 1,422 Yeah, they never did them for the DMC games on Switch either. There's a good chance this never gets a physical release. We'll see!  
  • Scientists Use DNA to Trace Early Humans' Footsteps From Asia to South America

    New Research

    Scientists Use DNA to Trace Early Humans’ Footsteps From Asia to South America
    Over thousands of years, humans from Eurasia trekked more than 12,400 miles to eventually reach the southernmost tip of South America, a new genetic investigation suggests

    Researchers have used genomic sequencing to trace what they’re calling the “longest migration out of Africa.”
    Nanyang Technological University

    Tens of thousands of years ago, Homo sapiens embarked on a major migration out of Africa and began settling around the world. But exactly how, when and where humans expanded has long been a source of debate.
    Now, researchers have used genomic sequencing to trace what they’re calling the “longest migration out of Africa.” Over the course of many generations and thousands of years, humans from Eurasia trekked more than 12,400 miles to eventually reach the southernmost tip of South America, according to a new paper published in the journal Science.
    In addition to providing insight into human expansion throughout the Americas, the analysis also sheds new light on health differences between populations. In the future, the researchers hope their work will contribute to personalized medical care based on an individual’s genetic profile.
    “[It is] only after we know the entire genetic makeup of humanity that we can provide precision medicine that is specific to the needs of every ethnic group, in particular, those that have become endangered and are on the brink of going extinct,” co-author Stephan Schuster, a genomicist at Nanyang Technological University in Singapore, tells the Straits Times’ Judith Tan.

    #NTUsg researchers: Early Asians made the longest human migration in prehistory

    For the study, an international team of scientists analyzed the genomes of 1,537 individuals from 139 ethnic groups in South America and Northeast Eurasia. Comparing this DNA allowed them to reconstruct the human migration from Asia to South America, following the “genetic footprints left behind by the early settlers,” as lead author Elena Gusareva, a biologist at Nanyang Technological University, tells Cosmos magazine’s Evrim Yazgin.
    Modern humans arrived in northern Eurasia around 45,000 years ago. By roughly 31,600 years ago, they had migrated east toward Beringia, the land bridge connecting Asia and North America in what is now the Bering Strait. From there, they walked into present-day Alaska. They expanded across North America and eventually headed into South America, reaching the continent’s northwest tip around 14,000 years ago.
    “Our findings show that Native Americans are descendants of Asian populations, particularly from the West Beringian region,” says study co-author Kim Hie Lim, a genomicist at Nanyang Technological University, to the South China Morning Post’s Victoria Bela.
    This South American group then split into four genetic lineages, the researchers found. One population headed east toward the Dry Chaco region, while another went south to Patagonia. One climbed up into the Andes Mountains, while another remained in the Amazon basin.
    Once the groups split off, they became isolated by the continent’s geography, which reduced their genetic diversity. More specifically, the researchers found a reduced diversity of human leukocyte antigen (HLA) genes, which help support immune health.
    Reduced genetic diversity may have made these early South Americans more susceptible to diseases introduced by European colonists, the researchers posit.
    “Understanding how ancient populations moved and settled not only helps us understand human history, but also explains how their immune systems adapted to different environments,” Kim tells the Borneo Bulletin.
    Data used in the study came from GenomeAsia 100K, a large-scale project that aims to sequence 100,000 Asian human genomes.
    “Most existing medicines were developed based on studies of European populations, often excluding Indigenous populations,” Kim tells Live Science’s Kristina Killgrove. “It is critical to provide tailored healthcare and disease prevention strategies that consider their specific genetic profiles.”

    WWW.SMITHSONIANMAG.COM
    Scientists Use DNA to Trace Early Humans' Footsteps From Asia to South America
    New Research Scientists Use DNA to Trace Early Humans’ Footsteps From Asia to South America Over thousands of years, humans from Eurasia trekked more than 12,400 miles to eventually reach the southernmost tip of South America, a new genetic investigation suggests Researchers have used genomic sequencing to trace what they’re calling the “longest migration out of Africa.” Nanyang Technological University Tens of thousands of years ago, Homo sapiens embarked on a major migration out of Africa and began settling around the world. But exactly how, when and where humans expanded has long been a source of debate. Now, researchers have used genomic sequencing to trace what they’re calling the “longest migration out of Africa.” Over the course of many generations and thousands of years, humans from Eurasia trekked more than 12,400 miles to eventually reach the southernmost tip of South America, according to a new paper published in the journal Science. In addition to providing insight into human expansion throughout the Americas, the analysis also sheds new light on health differences between populations. In the future, the researchers hope their work will contribute to personalized medical care based on an individual’s genetic profile. “[It is] only after we know the entire genetic makeup of humanity that we can provide precision medicine that is specific to the needs of every ethnic group, in particular, those that have become endangered and are on the brink of going extinct,” co-author Stephan Schuster, a genomicist at Nanyang Technological University in Singapore, tells the Straits Times’ Judith Tan. #NTUsg researchers: Early Asians made the longest human migration in prehistory Watch on For the study, an international team of scientists analyzed the genomes of 1,537 individuals from 139 ethnic groups in South America and Northeast Eurasia. 
    Comparing this DNA allowed them to reconstruct the human migration from Asia to South America, following the “genetic footprints left behind by the early settlers,” as lead author Elena Gusareva, a biologist at Nanyang Technological University, tells Cosmos magazine’s Evrim Yazgin.

    Modern humans arrived in northern Eurasia around 45,000 years ago. By roughly 31,600 years ago, they had migrated east toward Beringia, the land bridge connecting Asia and North America in what is now the Bering Strait. From there, they walked into present-day Alaska. They expanded across North America and eventually headed into South America, reaching the continent’s northwest tip around 14,000 years ago.

    “Our findings show that Native Americans are descendants of Asian populations, particularly from the West Beringian region,” says study co-author Kim Hie Lim, a genomicist at Nanyang Technological University, to the South China Morning Post’s Victoria Bela.

    Once in South America, this group split into four genetic lineages, the researchers found. One population headed east toward the Dry Chaco region, while another went south to Patagonia. One climbed up into the Andes Mountains, while another remained in the Amazon basin. After the groups split off, they became isolated by the continent’s geography, which reduced their genetic diversity. More specifically, the researchers found a reduced diversity of human leukocyte antigen (HLA) genes, which help support immune health. Reduced genetic diversity may have made these early South Americans more susceptible to diseases introduced by European colonists, the researchers posit.

    “Understanding how ancient populations moved and settled not only helps us understand human history, but also explains how their immune systems adapted to different environments,” Kim tells the Borneo Bulletin.

    Data used in the study came from GenomeAsia 100K, a large-scale project that aims to sequence 100,000 Asian human genomes.
    “Most existing medicines were developed based on studies of European populations, often excluding Indigenous populations,” Kim tells Live Science’s Kristina Killgrove. “It is critical to provide tailored healthcare and disease prevention strategies that consider their specific genetic profiles.”