How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
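    To make that alert-fatigue point concrete, here is a minimal sketch of one way a background clinical assistant might throttle routine notifications while always letting urgent ones through. The severity labels and the cooldown policy are illustrative assumptions, not any product’s actual behavior.

```python
# A minimal sketch of rate-limiting AI alerts so routine notifications
# cannot crowd out urgent ones. Severity labels and the cooldown window
# are illustrative assumptions, not any real product's policy.
import time

class AlertThrottle:
    def __init__(self, cooldown_seconds: float = 3600.0):
        self.cooldown = cooldown_seconds
        self.last_routine_alert = float("-inf")

    def should_send(self, severity: str) -> bool:
        if severity == "urgent":
            return True  # urgent alerts always go through
        now = time.monotonic()
        if now - self.last_routine_alert >= self.cooldown:
            self.last_routine_alert = now  # at most one routine alert per window
            return True
        return False  # suppressed; too soon after the last routine alert
```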
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, …
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
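    For readers who want to see the shape of this probe, here is a minimal sketch. The case text, the planted error, and the ask_llm helper are all hypothetical, not the actual prompts used in the tests described above.

```python
# A minimal sketch of the two-error differential probe described above:
# present a case plus a differential containing one planted textbook error
# and one omission, then see whether the model will say the list is wrong.
# The case, the planted error, and `ask_llm` are hypothetical.
def build_probe(case_summary: str, differential: list[str]) -> str:
    numbered = "\n".join(f"{i + 1}. {dx}" for i, dx in enumerate(differential))
    return (
        f"{case_summary}\n\n"
        f"My proposed differential diagnosis:\n{numbered}\n\n"
        "Please critique this differential. Is anything wrong or missing?"
    )

probe = build_probe(
    "Mythical patient: 62-year-old with fever, productive cough, and a "
    "right-lower-lobe infiltrate on chest X-ray; initial labs attached.",
    [
        "Community-acquired pneumonia",
        "Acute cholecystitis",  # planted textbook error for this presentation
        # a diagnosis the vignette should suggest is deliberately omitted
    ],
)
# reply = ask_llm(probe)  # hypothetical chat-completion call
# A sycophantic model praises the list; a useful one flags both mistakes.
```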
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
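    A short sketch can make the mechanism Bubeck describes concrete. In standard RLHF, the reward model is typically trained on pairwise human preferences with a Bradley-Terry-style objective, and the policy is then optimized toward outputs that this learned reward scores highly. The code below is an illustrative sketch of that standard objective, not OpenAI’s training code.

```python
# A minimal sketch of the standard pairwise reward-model objective used
# in RLHF (Bradley-Terry style); illustrative, not OpenAI's actual code.
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # r_chosen / r_rejected: scalar rewards the model assigns to the
    # preferred and rejected responses for the same prompt.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# The trap Bubeck describes: the policy is then optimized against this
# learned reward, and if pushed too hard it exploits the reward model's
# blind spots -- e.g., flattery that the reward model over-scores.
```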
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, having read all the literature of the world about good doctors, bad doctors, the models will understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just, you know, explain your own context and it will just get it and understand everything.
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
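    As a minimal illustration of what “checkable” means here, a Lean proof is verified mechanically by the proof checker, whether or not any human finds it readable:

```lean
-- A minimal example of a machine-checkable proof in Lean 4: the kernel
-- verifies it mechanically, independent of human legibility. The proofs
-- Lee predicts would be vastly larger, but checked the same way.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```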
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
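    As a rough sketch of what rubric-style, task-based grading can look like, each free-form response is scored against weighted criteria rather than a multiple-choice answer key. The data structures below are hypothetical illustrations in that spirit, not the actual HealthBench or ADeLe formats.

```python
# A rough sketch of rubric-style grading in the spirit of benchmarks like
# HealthBench; the data structures are hypothetical, not the real format.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RubricItem:
    criterion: str  # e.g., "recommends urgent evaluation for red-flag symptoms"
    points: float   # positive for desired behavior, negative for harmful

def score_response(response: str, rubric: list[RubricItem],
                   met: Callable[[str, str], bool]) -> float:
    # `met(response, criterion)` decides whether a criterion is satisfied;
    # in practice that judgment is usually made by a model-based grader.
    earned = sum(item.points for item in rubric if met(response, item.criterion))
    possible = sum(item.points for item in rubric if item.points > 0)
    return max(0.0, earned) / possible if possible else 0.0
```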
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
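    As a minimal sketch of how the retrieval step of a “patients like me” system could work: embed the current patient’s data, find the most similar historical records, and surface how those patients were diagnosed and treated. The embedding helper and records are hypothetical, and a real system would need privacy, consent, and clinical governance on top.

```python
# A minimal sketch of "patients like me" retrieval: nearest-neighbor
# search over patient-record embeddings. The `embed` helper and the
# cohort data are hypothetical placeholders.
import numpy as np

def most_similar(query_vec: np.ndarray, record_vecs: np.ndarray, k: int = 5) -> np.ndarray:
    # cosine similarity between the query patient and each stored record
    q = query_vec / np.linalg.norm(query_vec)
    r = record_vecs / np.linalg.norm(record_vecs, axis=1, keepdims=True)
    sims = r @ q
    return np.argsort(-sims)[:k]  # indices of the k most similar patients

# query_vec = embed(current_patient_chart)          # hypothetical embedding step
# idx = most_similar(query_vec, cohort_embeddings)  # then review how those
# patients were diagnosed and treated, and what their outcomes were.
```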
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    #how #reshaping #future #healthcare #medical
    How AI is reshaping the future of healthcare and medical research
    Transcript        PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”           This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. 
He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.      Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. 
So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. 
So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  
Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. 
Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. 
It has to do with the post-training and, like, where do you nudge the model. So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.

But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now: that if you push too hard in optimization on this reward model, you will get a sycophant model.

So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.

LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …

BUBECK: It’s a very difficult, very difficult balance.

LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?

GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns, like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have in there.

Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their lives, to some degree, that, I’m in a different context, and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?

Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning from the very best humans in that context would still be valuable. Eventually, having read all the literature of the world about good doctors and bad doctors, the models will understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.

LEE: Yeah.

GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.
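An aside for technically minded readers: the “trap of the reward model” Sébastien describes above, where pushing too hard against a learned proxy lets the policy exploit its quirks, can be seen in a toy calculation. The sketch below is entirely illustrative: the three canned responses, the “true quality” scores, and the reward-model scores are all invented, and a closed-form KL-regularized policy stands in for real RLHF training.

import numpy as np

# Toy model of reward-model over-optimization. All numbers are invented.
# Candidate responses to "Here is my (flawed) differential diagnosis":
#   index 0: points out the mistakes   (what we actually want)
#   index 1: neutral summary, no critique
#   index 2: praises the differential  (sycophantic)
true_quality = np.array([1.0, 0.5, 0.0])   # what careful raters would endorse
reward_model = np.array([0.9, 0.6, 1.2])   # learned proxy; slightly overrates flattery
ref_policy   = np.array([0.4, 0.4, 0.2])   # reference model's response distribution

def tuned_policy(beta):
    """Solve max_pi E_pi[reward] - beta * KL(pi || pi_ref) in closed form:
    pi(y) is proportional to pi_ref(y) * exp(reward(y) / beta).
    Smaller beta means pushing harder on the reward model."""
    logits = np.log(ref_policy) + reward_model / beta
    p = np.exp(logits - logits.max())   # subtract max for numerical stability
    return p / p.sum()

for beta in (5.0, 1.0, 0.1):
    pi = tuned_policy(beta)
    print(f"beta={beta:>4}: pi={np.round(pi, 3)}, "
          f"proxy reward={pi @ reward_model:.2f}, true quality={pi @ true_quality:.2f}")
# As beta shrinks, the proxy reward keeps rising while true quality falls:
# the tuned policy collapses onto the sycophantic response the proxy overrates.

The pattern, not the numbers, is the point: optimizing ever harder against an imperfect learned reward keeps improving the proxy score while degrading what people actually want, which is one plausible reading of how a sycophantic model can emerge from a well-intentioned pipeline.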
LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on?

BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything, and you can just, you know, explain your own context and it will just get it and understand everything.

That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.

LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?

BUBECK: Yeah, no, absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important, because they allow you to provide this broad base to everyone. And then you can specialize on top of it.

LEE: So we have about three hours of stuff to talk about, but our time is actually running low.

BUBECK: Yes, yes, yes.

LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?

GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We’ve focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. Those are, you know, testable-output-type jobs, but with still very high value, and I can see, you know, some replacement in those areas before the doctor.

The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either. People will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.
And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.

LEE: Is there a useful comparison, say, between doctors and computer programmers, or doctors and, I don’t know, lawyers?

GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.

LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something, but will be so complex that no human mathematician can understand them. I expect that to happen.

I can imagine in some fields, like cellular biology, we could have the same situation in the future, because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where, in the wet lab, we see, Oh yeah, this actually works, but no one can understand why.

BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery is when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.

And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors, was open for a long time, and o3 was able to reduce the case of three colors to two colors.

LEE: Yeah.

BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect would happen so quickly, and it’s due to those reasoning models.

Now, on the delivery side, I would add something more to it, for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you, when they are confronted with a really new, novel situation, whether they will work or not. Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system out in the wild without human supervision.

LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …

BUBECK: Yeah.

LEE: … or an endocrinologist might not.

BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer.
And this, we just don’t have with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.

LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?

BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.

And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society, and for good reason. But now we already are at two years; you know, give it another two years and it should be really …

LEE: Will AI prescribe your medicines? Write your prescriptions?

BUBECK: I think yes. I think yes.

LEE: OK. Bill?

GATES: Well, I think in the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and on the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health, with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, the quality of care, the reduced overload on the doctors, the improvement in the economics will be enough that their voters will be stunned, because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.

You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable, because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.

LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.

I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.

GATES: Yeah. Thanks, you guys.

BUBECK: Thank you, Peter. Thanks, Bill.

LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.

With Bill, I’m always amazed at how practically minded he is.
He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.

And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.

One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI released a new evaluation benchmark that is directly relevant to medical applications, something called HealthBench. And Microsoft Research also released a new evaluation approach called ADeLe.

HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing knowledge and the ability to pass multiple-choice exams and more about assessing how well AI models can complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important work that speaks to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.

You know, I asked Bill and Seb to make some predictions about the future. My own answer: I expect that we’re going to be able to use AI to change how we diagnose patients and how we decide treatment options.

If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery of healthcare and medical practice in data and intelligence, I actually now don’t see any barriers to that future becoming real.

I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.

Until next time.
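A short technical coda: the “patients like me” paradigm described above is, at its core, a retrieval-and-summarization problem over outcomes data. The sketch below is a deliberately minimal illustration of that idea, not any real product; the cohort, the features (age, systolic blood pressure, HbA1c), and the outcome labels are all fabricated, and a real system would need far richer data, privacy safeguards, and clinical validation.

import numpy as np

rng = np.random.default_rng(0)

# Fabricated cohort: each row is (age, systolic BP in mmHg, HbA1c in %),
# with a recorded treatment and a binary outcome. Nothing here is real data.
cohort = rng.normal([60, 140, 7.0], [10, 15, 1.0], size=(500, 3))
treatments = rng.choice(["drug_A", "drug_B"], size=500)
improved = rng.random(500) < 0.6          # True = patient improved (fake labels)

def patients_like_me(patient, k=25):
    """Summarize outcomes by treatment among the k most similar past patients."""
    # Normalize features so no single unit (e.g., mmHg) dominates the distance.
    mu, sigma = cohort.mean(axis=0), cohort.std(axis=0)
    z = (cohort - mu) / sigma
    q = (np.asarray(patient, dtype=float) - mu) / sigma
    nearest = np.argsort(np.linalg.norm(z - q, axis=1))[:k]
    summary = {}
    for t in np.unique(treatments[nearest]):
        mask = treatments[nearest] == t
        summary[t] = {"n": int(mask.sum()),
                      "improvement_rate": round(float(improved[nearest][mask].mean()), 2)}
    return summary

# Example query: a 58-year-old with BP 150 and HbA1c 7.8.
print(patients_like_me([58, 150, 7.8]))

The design choice worth noting is the feature normalization: without it, whichever measurement happens to have the largest numeric range (blood pressure, in these units) would silently dominate the similarity search.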
  • 15 Dreamy Girly Bedroom Ideas You’ll Want to Steal

    Designing a girly bedroom is about so much more than picking a pretty color. It’s about creating a space that reflects personality, inspires creativity, and feels like a true escape, whether it’s for a little girl, a growing tween, or a style-loving teen. From playful wallpaper tricks to smart storage ideas and cozy reading nooks, the right design choices can turn a simple bedroom into something magical and meaningful.

    In this guide, we’re skipping over-the-top themes and diving into real, creative design tips that anyone can use. Whether you’re decorating from scratch or giving an existing space a fresh update, these 15 girly bedroom ideas will help you build a room that’s both beautiful and completely personal. Let’s get into the ideas that make a room not just look pretty,but feel like home.

    1. Master the Mix-and-Match Look

    Image Source: House Beautiful

    Mixing and matching styles, textures, and prints can create a beautifully curated bedroom full of personality. Instead of sticking to one look, combine modern and vintage pieces or pair graphic prints with soft florals. You might match a velvet headboard with rattan nightstands, or polka dot bedding with a plaid throw. 

    The key is consistency in your color palette,stick to 2–3 dominant hues to make the mix feel intentional. Balance is essential: if you’re using a bold pattern on the bed, keep the walls subtle. This technique creates a room that feels playful, personal, and effortlessly stylish.

    2. Be Creative with Wallpaper

    Image Source: Ghayda Nsour

Wallpaper can completely transform a room, but don’t stop at the walls! Use it behind shelves, inside closets, on the ceiling, or even on drawer fronts. Choose designs that reflect your personality, like watercolor florals, animal prints, or dreamy clouds. For a modern look, try graphic shapes in soft pastels. Use peel-and-stick wallpaper if you’re renting or want a low-commitment option. Mix one bold feature wall with neutral paint elsewhere to keep the space grounded. Wallpaper isn’t just background; it’s a design statement that can define the whole vibe of the room.

    3. Create a Magical Reading Nook

    Image Source: House Beautiful

A cozy, magical reading nook makes a bedroom feel like a retreat. Pick a quiet corner by the window or even under a loft bed. Add a plush bean bag, floor cushions, or a hanging chair. Use soft lighting (think fairy lights or a mushroom-shaped lamp) and layer in blankets and pillows.

    Install a small bookshelf or floating ledges for easy access to books. Add a canopy or sheer curtain for privacy and charm. This tiny space becomes a personal hideaway, perfect for getting lost in a book or daydreaming in comfort.

    4. Keep Things Crisp with White and Neutrals

    Image Source: House Beautiful

    A white or neutral color scheme is timeless, clean, and chic. Use white walls as a canvas, then layer in soft greys, taupes, or blush accents for warmth. Choose bedding with subtle embroidery or ruffles, and use natural textures like linen, cotton, and jute to add depth. 

Light-colored wood furniture adds to the serene vibe. This look works beautifully in both small and large spaces, as it keeps everything bright and airy. Add interest with small pops of color, like a lavender throw or gold-accented lamp, to keep it from feeling too sterile.

    5. Design a Fairy Tale Hideaway

    Image Source: House Beautiful

Bring fairy tale magic to life with soft, whimsical touches. Start with pastel or dusty-tone paint: think lilac, blush, or icy blue. Add a canopy over the bed with tulle or lace, and incorporate soft lighting like fairy lights or a tiny chandelier. Choose furniture with elegant curves, like a vintage-inspired vanity or a carved wood headboard.

Add elements like star-shaped pillows, storybook art prints, or a tiny dress-up corner. This style isn’t just for little girls; it can be adapted for any age with the right balance of enchantment and elegance.

    6. Try an Unexpected Color Scheme

    Image Source: House Beautiful

    Go beyond typical “girly” colors and experiment with fresh combinations. Try pairing emerald green with blush pink, or mustard yellow with lavender. Using non-traditional combos instantly modernizes the space. 

To keep it cohesive, let one color dominate while the other plays a supporting role. You can also anchor the palette with neutral base tones like white, grey, or wood textures. Use the fun color in accessories (pillows, rugs, art) and let the secondary color pop through bedding or an accent wall. This bold choice makes the room stand out and feel grown-up and creative.

    7. Make a Statement with an Accent Wall

    Image Source: Samar Gamal

A bold accent wall can completely elevate a girly bedroom without overwhelming the space. To create a showstopping backdrop, choose a rich color (like plum or mauve), velvet paneling, or wallpaper with texture or pattern. Framing the wall with architectural elements, like arches or built-in lighting, adds even more drama and depth. This technique works beautifully behind the bed, transforming it into a focal point. Keep the surrounding walls neutral so the accent shines, and tie the rest of the room’s palette into the wall’s tones through bedding, curtains, or rugs. Whether soft or striking, an accent wall sets the tone for the entire space.

    8. Create a Personalized Gallery Wall

    Image Source: Samira Mahmudlu

Turn a blank wall into a living collage of favorite things. Mix framed art prints, personal photos, inspirational quotes, and even fabric swatches or pressed flowers. Use a variety of frame shapes and sizes for an eclectic look, or keep them uniform for a cleaner style. Arrange everything on the floor first to find the perfect layout before you hang. This gallery wall becomes a rotating story of who she is: what she loves, what inspires her, and where she dreams of going. It’s an easy way to update the space regularly.

    9. Add a Canopy or Curtain Accent

    Image Source: House Beautiful

Canopies aren’t just for beds: use soft, sheer curtains to frame a reading corner, a vanity, or even an entire wall. Install ceiling hooks or curtain rods to drape the fabric, and layer with twinkle lights for added charm. Choose materials like tulle, gauze, or voile in light pastel tones to keep things dreamy. This instantly gives the room a soft, cozy vibe and creates that “fairy tale” feel without going over the top.

    10. Make Storage Beautiful and Practical

    Image Source: House Beautiful

    Smart storage is essential, but it can also be part of the decor. Use decorative bins in woven, velvet, or metallic finishes. Floating wall cubes can hold books, plants, or collectibles. 

    Opt for under-bed storage drawers or a bed frame with built-in shelves. A cute coat rack, jewelry organizer, or peg rail keeps accessories tidy and stylish. When everything has its place, the room feels more peaceful and easier to enjoy.

    11. Embrace Pink as a Primary Design Element

Don’t just use pink as an accent; let it lead the entire design. Choose a range of tones like blush, rose, and dusty mauve, then layer them throughout the space: on walls, bedding, furniture, and décor. Vary the textures to prevent the room from feeling flat: think velvet upholstery, cotton bedding, matte finishes, and metallic accents.

    Pair your pinks with soft neutrals like white, beige, or light wood to balance the color and keep the room light and breathable. Pink doesn’t have to be overly sweet; with the right shades and balance, it feels calm, modern, and elegant. This approach works beautifully for girls’ rooms that want to lean feminine without feeling too “theme-y.”

    12. Use Architectural Curves and Built-In Shapes

    Image Source: Kaiwan Hamza

    Incorporating soft curves in your design instantly adds charm and sophistication. Instead of standard square furniture and sharp lines, opt for arched wall cutouts, rounded shelves, circular reading nooks, and oval mirrors. You can mimic architectural curves through painted arches, custom cabinetry, or even curved headboards. 

These shapes soften the room’s feel and make it visually unique. For a truly cohesive look, repeat the curve motif across several areas: window treatments, lighting, or even rugs. This technique is especially powerful when paired with soft colors and layered textures, as it creates a space that feels whimsical yet mature.

    13. Stick to the Classics

    Image Source: Sara Al Refai

There’s a reason some design elements never go out of style: they work. Sticking to the classics means using timeless materials, shapes, and palettes that grow with the child. Think white furniture, soft pink or lavender walls, floral bedding, and elegant drapery.

Go for a tufted headboard, framed artwork, and crystal-inspired lighting for a touch of sophistication. These pieces can be updated with accessories as tastes change, but the core elements remain versatile and stylish. This approach also helps future-proof the room, saving time and money on constant redecoration. If you’re unsure where to start, lean into a classic French or vintage-inspired style; delicate moldings, soft patterns, and warm lighting are always a win.

    14. Design with Symmetry for a Polished Look

    Image Source: Menna Hussien

Symmetry creates balance, calm, and a naturally pleasing layout, especially in shared bedrooms. This image is a perfect example: identical beds, mirrored bedding, and a centered nightstand create harmony and order. To use this concept in a girly bedroom, start by repeating core pieces on each side: beds, lamps, pillows, or wall sconces.

Choose neutral tones like beige, blush, or ivory to maintain a serene vibe. You can also mirror wall decor or shelving to extend the symmetry across the space. It doesn’t need to be exact; balance can come from visual weight, not just identical pieces. This method works particularly well for siblings, guest rooms, or for a clean and elegant design that feels effortlessly organized.

    15. Design a Minimalistic Girly Bedroom

    Image Source: Miral Tarek

Minimal doesn’t mean boring; it means intentional. A minimal girly bedroom uses clean lines, soft pastels, and refined details to create a calm, elevated space. Stick to a restrained color palette like blush and powder blue, then let furniture and texture do the talking. Choose sleek pieces: a tufted headboard, elegant side tables, and delicate lighting. Avoid clutter by limiting accessories and keeping surfaces clean. One or two standout pieces (like a floral painting or sculpted ceiling fixture) add character without overloading the room. The result is peaceful, polished, and perfect for a girl who prefers subtle over sparkly.

    Finishing Notes

    Designing a girly bedroom isn’t about following trends or sticking to one color—it’s about creating a space that reflects personality, sparks imagination, and grows with time. Whether you’re planning a soft pastel retreat, a bold and modern haven, or something whimsical in between, the ideas shared here are meant to inspire creativity and confidence in your design choices.

    At Home Designing, we believe that every corner of a home, especially a child’s bedroom, should be both beautiful and functional. Our mission is to help you transform everyday spaces into something extraordinary through smart layouts, thoughtful details, and timeless inspiration.
  • Is Nightreign Solo Play Really Impossible?

Elden Ring Nightreign is a tough-as-nails game that blends the beloved roguelike and soulslike genres into something fans of both should find appealing. However, unlike most games in either genre, this one’s inherently designed around working together in a group of three. So, you may be wondering if you can strike out on your own in Elden Ring Nightreign. While the game is about to get easier for folks who choose to go it alone, right now such a style proves an exceptionally difficult challenge.


Can you play Elden Ring Nightreign solo?

Let’s get this out of the way first: Yes, Elden Ring Nightreign offers the option for solo play. To do so, you’ll need to open the expedition menu at Roundtable Hold, then switch over to the matchmaking settings tab. At the bottom of the menu, set the Expedition Type to “Singleplayer.”

The real question is whether Elden Ring Nightreign’s single-player experience is manageable or fun, and that really depends on your skill level, class choice, and patience more so than in any other similar game I can remember playing. If you really want to go at it by yourself, play as Ironeye or Wylder.

Elden Ring Nightreign is already pretty damn challenging when running with a group of three other folks. The game’s sense of randomness adds a lot of unknowns to an expedition, and things can go wrong very quickly. But with a team, you can be revived, have someone else available to take some aggro from you when things get hairy, and use your character’s abilities to complement one another in difficult showdowns. It’s often still hard as hell, but victory usually feels possible even when things don’t go quite as planned.

However, when you’re alone… Well, you’re all alone. If you die on a solo expedition, that’s it. You’re done. Back to the Roundtable Hold with you, loser.

With this in mind, some folks may find the anxiety-inducing pacing and chaotic showdowns enjoyable even while solo, but those who struggle to succeed without a group may find it demoralizing to watch hours go by without making any meaningful progress. And since some classes are much better for solo play than others, it can be even more frustrating to go it alone for someone who prefers to play one of the support-focused classes.

If you really want to go at it by yourself, I’d recommend taking a look at Ironeye or Wylder.

Ironeye’s ranged playstyle is the safest in the game, giving you a lot of freedom to tackle enemies your own way. For instance, you can take the high ground against some foes to avoid their attacks altogether, or use his sliding ability to dodge an attack and get behind an enemy for better positioning.

Wylder, meanwhile, is a jack-of-all-trades character with a solid health pool and balanced stats that make him great at adapting to whatever type of loot a run provides. Simply grab any melee weapon and you’ll probably be doing alright with this fella. Plus, he has some of the coolest skins in the game. That doesn’t help you in battle, but like… come on. He looks rad.

In conclusion, while things can certainly go poorly even with a team, I’d argue playing by your lonesome leaves too little room for error in a game that requires such a hefty time investment and offers minimal payoff for failure. Elden Ring Nightreign is designed from the ground up to be played with others, after all. Your mileage may vary, though, so play however you have fun with it! You can pick up Nightreign now on PS5, Xbox Series X/S, and Windows PCs. You’ll have to look elsewhere to pick up two other friends to play with, though.
  • How to play Duchess in Elden Ring Nightreign

    The Duchess is one of the sharpest classes in Elden Ring Nightreign. The undercover priestess loves to dip in and out of combat, overwhelming foes with fast attacks and status ailments.

    Once you unlock the Duchess, pick her if you like to stay nimble and quickly dominate foes with excessively high damage per second. While her damage potential is one of the highest, Duchess has some apparent weaknesses that can diminish her viability.

    If you’re an aspiring Duchess main who wants to get the best out of the character in your future expeditions, this Elden Ring Nightreign guide will show you how to play as the Duchess, with a focus on recommendations for her best relics, best teammates, and best weapons, alongside other miscellaneous tips.

    How to make a great Duchess build in Elden Ring Nightreign

As a dextrous character, you’ll want to craft your Duchess build around daggers, katanas, and curved greatswords. All in all, anything that has a fast move set and is able to apply bleed is beneficial to a Duchess build. Bleed just overall synergizes with Duchess’s Restage ability, so when in doubt, prioritize looking for weapons and relics that allow for bleed application.

    Since Duchess scales with intelligence and faith as well, she does well with most of the game’s ranged weapon options. We recommend a good bow or staff in your equipment loadout.

    Best relics for Duchess in Elden Ring Nightreign

    Duchess scales primarily off of Arcane, Faith, and Dexterity, so the best relics for her are those that provide boosts to those specific stats. You can unlock these relics by just playing the game, but the best relics come from completing runs and defeating Nightlords. Another great way to gain access to high-quality relics is by completing the remembrance objectives found in the journal.

Some of the Duchess’ best options are relics with the following effects:

[Duchess] Dagger chain attack reprises event upon nearby enemies

[Duchess] Improved character skill attack power

[Duchess] Defeating enemies while Art is active ups attack power

[Duchess] Become difficult to spot and silence footsteps after landing critical from behind

    Boosts attack power of added affinity attacks

    Improved stance breaking when wielding two armaments

    Any relic that increases Dexterity, Intelligence, or Endurance

    Character skill cooldown reduction

Relics that make your starting armament inflict a status ailment are also good if you can match the ailment to a Nightlord weakness

    Best teammates for Duchess in Elden Ring Nightreign

    Duchess excels when she’s able to deal damage unimpeded. She fits neatly into team comps that create enough space for her to do as she pleases. As the game evolves, new strategies may emerge, but at launch, the following classes are great fits as teammates for the Duchess and are likely to remain so for the foreseeable future.

    Guardian — Guardian’s ultimate art provides a useful damage negation buff to teammates in its radius, helping Duchess with her survivability.

Raider — Debatably Nightreign’s tankiest character, he can easily handle enemy aggro, allowing Duchess to set up good uses of her Restage ability.

    Wylder — Wylder’s character skill allows him to grapple enemies to him. In the early stages of an expedition, Duchess is at her weakest. A good Wylder can help mitigate enemy aggro by yanking them away from her.

    Duchess — Having multiple Duchess players is not as good as the other picks; however, there is an unusual synergy with her Restage character skill. Since it applies to allies’ damage as well as her own, multiple Duchess players can rapidly apply status ailments like Bleed and demolish bosses.

    Best weapons for Duchess in Elden Ring Nightreign

    Hands down, the best weapon for Duchess is a dagger, as Duchess prefers to build up status ailments as quickly as possible. To that end, it’s best to equip her with weapons that have fast move sets.

That said, even though she’s a dextrous character, she also has good intelligence scaling. Staves are extremely underrated on Duchess. Since there are no equipment load or stat requirements outside of character level, there is simply no reason not to have at least one good staff with you at all times. Below are some of the best weapons for Duchess:

    Crystal Knife

    Reduvia

    Wakizashi

    Moonveil

    Rivers of Blood

    Meteoric Ore Blade

    Horned Bow

    Carian Regal Scepter

    Best talismans for Duchess in Elden Ring Nightreign

It isn’t often you’ll run into a talisman during an expedition, but if you get lucky, they can completely dial up the effectiveness of any character. Be on the lookout for scarabs, a returning enemy from Elden Ring, which tend to drop them.

    The best talismans for the Duchess are the following:

    Millicent’s Prosthesis — Boosts attack power with successive attacks.

    Twinblade Talisman — Boosts the power of chain attack finishers.

Lord of Blood’s Exultation — Boosts attack power when blood loss occurs in the vicinity.

Depending on your playstyle, you might even prefer to use talismans that increase spell casting power, such as Graven-School Talisman or Radagon Icon. While spellcasting may not be her best option, it is still viable, and being able to adapt on the fly is the best skill you could have in Elden Ring Nightreign.

    For more Elden Ring Nightreign guides, here’s a list of all classes, the best class to pick first, how to unlock the Revenant, and the best early Recluse build.
    #how #play #duchess #elden #ring
    How to play Duchess in Elden Ring Nightreign
    The Duchess is one of the sharpest classes in Elden Ring Nightreign. The undercover priestess loves to dip in and out of combat, overwhelming foes with fast attacks and status ailments. Once you unlock the Duchess, pick her if you like to stay nimble and quickly dominate foes with excessively high damage per second. While her damage potential is one of the highest, Duchess has some apparent weaknesses that can diminish her viability. If you’re an aspiring Duchess main who wants to get the best out of the character in your future expeditions, this Elden Ring Nightreign guide will show you how to play as the Duchess, with a focus on recommendations for her best relics, best teammates, and best weapons, alongside other miscellaneous tips. How to make a great Duchess build in Elden Ring Nightreign As a dextrous character, you’ll want to craft your Duchess build around daggers, katanas, and curved great swords. All in all, anything that has a fast move set and is able to apply bleed is beneficial to a Duchess build. Bleed just overall synergizes with Duchess’s Restage ability, so when in doubt, prioritize looking for weapons and relics that allow for bleed application. Since Duchess scales with intelligence and faith as well, she does well with most of the game’s ranged weapon options. We recommend a good bow or staff in your equipment loadout. Best relics for Duchess in Elden Ring Nightreign Duchess scales primarily off of Arcane, Faith, and Dexterity, so the best relics for her are those that provide boosts to those specific stats. You can unlock these relics by just playing the game, but the best relics come from completing runs and defeating Nightlords. Another great way to gain access to high-quality relics is by completing the remembrance objectives found in the journal. Some of the Duchess’ best options are relics with the following effects:Dagger chain attack reprises event upon nearby enemiesImproved character skill attack powerDefeating enemies while Art is active ups attack powerBecome difficult to spot and silence footsteps after landing critical from behind Boosts attack power of added affinity attacks Improved stance breaking when wielding two armaments Any relic that increases Dexterity, Intelligence, or Endurance Character skill cooldown reduction Starting armament inflicts are good relics as well if you can match the status ailment with a Nightlord weakness Best teammates for Duchess in Elden Ring Nightreign Duchess excels when she’s able to deal damage unimpeded. She fits neatly into team comps that create enough space for her to do as she pleases. As the game evolves, new strategies may emerge, but at launch, the following classes are great fits as teammates for the Duchess and are likely to remain so for the foreseeable future. Guardian — Guardian’s ultimate art provides a useful damage negation buff to teammates in its radius, helping Duchess with her survivability. Raider — Debatably Nightreign’s tankiest character, he can easily handle enemy aggro, allowing for Duchess to set up good uses of her Restage ability. Wylder — Wylder’s character skill allows him to grapple enemies to him. In the early stages of an expedition, Duchess is at her weakest. A good Wylder can help mitigate enemy aggro by yanking them away from her. Duchess — Having multiple Duchess players is not as good as the other picks; however, there is an unusual synergy with her Restage character skill. 
Since it applies to allies’ damage as well as her own, multiple Duchess players can rapidly apply status ailments like Bleed and demolish bosses. Best weapons for Duchess in Elden Ring Nightreign Hands down, the best weapon for Duchess is a dagger, as Duchess prefers to build up status ailments as quickly as possible. To that end, it’s best to equip her with weapons that have fast move sets. That said, even though she’s a dextrous character, she also has good intelligence scaling. Staves are extremely underrated on Duchess. Since there’s no equipment load or stat requirements outside of levels, there is simply no reason not to have at least one good staff member with you at all times. Below are some of the best weapons for Duchess: Crystal Knife Reduvia Wakizashi Moonveil Rivers of Blood Meteoric Ore Blade Horned Bow Carian Regal Scepter Best talismans for Duchess in Elden Ring Nightreign It isn’t often you’ll run into a talisman during an expedition, but if you get lucky, they can completely dial up the effectiveness of any character. Be on the lookout for scarabs, a returning enemy from Elden Ring, who tend to drop them. The best talismans for the Duchess are the following: Millicent’s Prosthesis — Boosts attack power with successive attacks. Twinblade Talisman — Boosts the power of chain attack finishers. Lord of Blood’s Exultation — Boosts attack power when blood loss is in the vicinity. Depending on your playstyle, you might even prefer to use talismans that increase spell casting power, such as Graven-School Talisman or Radagon Icon. While it may not be her best option, it is still viable, and being able to adapt on the fly is the best skill you could have in Elden Ring Nightreign. For more Elden Ring Nightreign guides, here’s a list of all classes, the best class to pick first, how to unlock the Revenant, and the best early Recluse build. #how #play #duchess #elden #ring
    WWW.POLYGON.COM
    How to play Duchess in Elden Ring Nightreign
    The Duchess is one of the sharpest classes in Elden Ring Nightreign. The undercover priestess loves to dip in and out of combat, overwhelming foes with fast attacks and status ailments. Once you unlock the Duchess, pick her if you like to stay nimble and quickly dominate foes with exceptionally high damage per second. While her damage potential is among the highest, the Duchess has some apparent weaknesses that can diminish her viability. If you're an aspiring Duchess main who wants to get the best out of the character in your future expeditions, this Elden Ring Nightreign guide will show you how to play as the Duchess, with a focus on recommendations for her best relics, best teammates, and best weapons, alongside other miscellaneous tips.

    How to make a great Duchess build in Elden Ring Nightreign

    As a dexterous character, you'll want to craft your Duchess build around daggers, katanas, and curved greatswords. All in all, anything that has a fast move set and is able to apply bleed is beneficial to a Duchess build. Bleed synergizes with Duchess's Restage ability, so when in doubt, prioritize weapons and relics that allow for bleed application. Since Duchess scales with intelligence and faith as well, she does well with most of the game's ranged weapon options; we recommend a good bow or staff in your equipment loadout.

    Best relics for Duchess in Elden Ring Nightreign

    Duchess scales primarily off of Arcane, Faith, and Dexterity, so the best relics for her are those that boost those specific stats. You can unlock relics just by playing the game, but the best ones come from completing runs and defeating Nightlords. Another great way to gain access to high-quality relics is by completing the remembrance objectives found in the journal. Some of the Duchess's best options are relics with the following effects:

    - [Duchess] Dagger chain attack reprises event upon nearby enemies
    - [Duchess] Improved character skill attack power
    - [Duchess] Defeating enemies while Art is active ups attack power
    - [Duchess] Become difficult to spot and silence footsteps after landing critical from behind
    - Boosts attack power of added affinity attacks
    - Improved stance breaking when wielding two armaments
    - Any relic that increases Dexterity, Intelligence, or Endurance
    - Character skill cooldown reduction
    - Starting armament inflicts a status ailment (these are good relics as well if you can match the ailment to a Nightlord weakness)

    Best teammates for Duchess in Elden Ring Nightreign

    Duchess excels when she's able to deal damage unimpeded, so she fits neatly into team comps that create enough space for her to do as she pleases. As the game evolves, new strategies may emerge, but at launch, the following classes are great fits as teammates for the Duchess and are likely to remain so for the foreseeable future.

    - Guardian — Guardian's ultimate art provides a useful damage negation buff to teammates in its radius, helping Duchess with her survivability.
    - Raider — Debatably Nightreign's tankiest character, he can easily handle enemy aggro, allowing Duchess to set up good uses of her Restage ability.
    - Wylder — Wylder's character skill allows him to grapple enemies to him. In the early stages of an expedition, Duchess is at her weakest, and a good Wylder can help mitigate enemy aggro by yanking foes away from her.
    - Duchess — Having multiple Duchess players is not as strong as the other picks; however, there is an unusual synergy with her Restage character skill. Since it applies to allies' damage as well as her own, multiple Duchess players can rapidly apply status ailments like bleed and demolish bosses.

    Best weapons for Duchess in Elden Ring Nightreign

    Hands down, the best weapon for Duchess is a dagger, as Duchess prefers to build up status ailments as quickly as possible. To that end, it's best to equip her with weapons that have fast move sets. That said, even though she's a dexterity-focused character, she also has good intelligence scaling, and staves are extremely underrated on her. Since there's no equipment load and no stat requirements beyond levels, there is simply no reason not to carry at least one good staff at all times. Below are some of the best weapons for Duchess:

    - Crystal Knife
    - Reduvia
    - Wakizashi
    - Moonveil
    - Rivers of Blood
    - Meteoric Ore Blade
    - Horned Bow
    - Carian Regal Scepter

    Best talismans for Duchess in Elden Ring Nightreign

    It isn't often you'll run into a talisman during an expedition, but if you get lucky, they can completely dial up the effectiveness of any character. Be on the lookout for scarabs, a returning enemy from Elden Ring, which tend to drop them. The best talismans for the Duchess are the following:

    - Millicent's Prosthesis — Boosts attack power with successive attacks.
    - Twinblade Talisman — Boosts the power of chain attack finishers.
    - Lord of Blood's Exultation — Boosts attack power when blood loss occurs in the vicinity.

    Depending on your playstyle, you might even prefer talismans that increase spell-casting power, such as the Graven-School Talisman or Radagon Icon. While that may not be her best option, it is still viable, and being able to adapt on the fly is the best skill you could have in Elden Ring Nightreign. For more Elden Ring Nightreign guides, here's a list of all classes, the best class to pick first, how to unlock the Revenant, and the best early Recluse build.
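    To see why the guide keeps stressing "fast move set plus bleed," here is a toy sketch of how status procs scale with attack speed. Every number in it is invented for illustration; these are not actual Nightreign stats. The only assumption is the familiar FromSoftware mechanic the guide describes: bleed fires once accumulated buildup crosses a threshold, so hitting more often means proccing more often.

        # Toy model: bleed procs when accumulated buildup crosses a threshold,
        # so more hits per second means proportionally more procs.
        # All numbers are hypothetical, not actual Elden Ring Nightreign stats.
        def bleed_procs(hits_per_sec, buildup_per_hit, threshold=100.0, duration=10.0):
            buildup, procs, t = 0.0, 0, 0.0
            while t < duration:
                buildup += buildup_per_hit        # each hit adds to the bleed meter
                if buildup >= threshold:          # proc: bleed burst fires, meter resets
                    procs += 1
                    buildup = 0.0
                t += 1.0 / hits_per_sec
            return procs

        print(bleed_procs(hits_per_sec=3.0, buildup_per_hit=40))  # dagger-like: 10 procs
        print(bleed_procs(hits_per_sec=1.0, buildup_per_hit=40))  # slow weapon: 3 procs

    With identical buildup per hit, the faster weapon triples the proc count over the same window, which is the whole case for daggers on Duchess.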
  • 9 Best Cooling Mattresses of 2025, Tested by AD Editors

    WWW.ARCHITECTURALDIGEST.COM
    9 Best Cooling Mattresses of 2025, Tested by AD Editors
    While you can invest in high-quality sheets, the best cooling mattresses are a great foundation for a good night's sleep. These beds are often equipped with proper airflow and temperature-regulating technologies that might just end the tossing and turning in the middle of the night, especially if you're sleeping hot. To help along the way, AD editors and contributors set out to test the best in their homes. Cooling features our team kept an eye out for include everything from gel-infused foam to pocketed coils that help with motion isolation. We also looked at different mattress types (latex, hybrid, memory foam) and considered a variety of firmness levels. Below are some of our favorites, many of which come with a lengthy trial period, solid warranties, and even white glove delivery. Take a peek at the best cooling mattress options to suit your needs.

    Our Top Picks for the Best Cooling Mattresses:

    - Best Overall Cooling Mattress: Cocoon by Sealy The Chill Mattress, $899 (down from $1,389)
    - A Smart Option: Sleep Number i8 Mattress, $2,799 (down from $3,999)
    - Best Hybrid Mattress: Saatva Latex Hybrid Mattress, $2,199 (down from $2,499)
    - The Budget Pick: Allswell Supreme Mattress, $487
    - Best Memory Foam Mattress: GhostBed Luxe Foam Mattress, $1,499

    For consistency, all of the prices in this list reflect queen sizes.

    Best Overall Cooling Mattress: Cocoon by Sealy Chill Mattress

    Upsides: Affordable; includes a free Sealy Sleep Bundle (up to $199 value); memory foam layers adjust to sleep position.
    Downsides: Leans more firm than medium, according to our tester.
    Specs: Mattress type: Memory foam. Materials: Cooling cover, memory foam, cushioning foam. Firmness: Medium-firm. Warranty: 100-night sleep trial; 10-year warranty.

    "While my previous mattress was on the firmer side, it was not a memory foam or very cool mattress," says contributor Cade Hiser in his review. "With the Cocoon Chill memory foam mattress, I do not wake up in the middle of the night like I used to, tossing and turning. I also stay sleeping at a comfortable temperature throughout the night." The mattress clearly prioritizes body temperature control—hence the name. Hiser did note that since this memory foam mattress is a bit firmer, it may take a moment to adjust if you're used to purely soft beds. The mattress comes in a box and is ready to be rolled out.

    A Smart Option: Sleep Number i8 Smart Bed

    Upsides: Pressure-relieving support; ceramic gel to release excess heat; adjustable firmness levels for different sleepers.
    Downsides: Difficult to move, and requires unplugging the pump, which then needs to be reset via the app.
    Specs: Mattress type: Smart bed. Materials: CertiPUR-US certified foam, ceramic gel layer. Firmness: Adjustable. Warranty: 100-night trial; 15-year limited warranty.

    Sleep Number mattresses are lauded for their adjustable nature, and the i8 is a smart bed that also happens to help keep you from overheating thanks to its ceramic gel layer. The Responsive Air feature amps up the sleep quality by responding to movement throughout the night. "This mattress is exactly the firmness that I want on one side and exactly how my husband prefers on the other," says Lisa Aiken, the senior vice president of commerce at Condé Nast. "It is easily changed and adjusted on the app, which links your phone to the bed via Bluetooth so you can adjust at any time to suit your mood." Aiken was also impressed by the "exceptional customer service and delivery experience," which included not only a smooth mattress setup but also assistance in learning the Sleep Number app and all its capabilities so she could get the most out of her sleep experience.

    Best Hybrid Mattress: Saatva Latex Hybrid Mattress

    Upsides: Temperature regulating; minimal motion transfer; ideal for those with back pain; botanic antimicrobial treatment; old mattress and box spring removal is included in delivery.
    Downsides: Doesn't ship in a box.
    Specs: Mattress type: Latex hybrid. Materials: Natural latex foam layer with vented airflow channels, individually wrapped coils. Firmness: Medium-firm, buoyant feel. Warranty: 365-night home trial; lifetime warranty.

    Global editorial director and US editor in chief Amy Astley loves quite a few things about her Saatva mattress—the five ergonomic zones for support and the bed's pressure-relieving qualities—and does not overlook the cooling component. The mattress is hand-tufted, hypoallergenic, and made of organic natural latex with organic cotton and New Zealand wool covers to promote cooler sleep. "[My husband and I] both sleep warm and appreciate the vented airflow channels, which allow for circulation and breathability," she says. "When my husband shifts, I cannot feel the bed moving—heaven. Ultimately, we are both so happy to tuck into this bed and don't really want to get out of it in the morning." Astley refers to the Saatva as "mattress gold" because of its comfort, body heat regulation, and the ability to make her lower back pain disappear. She notes that it is firm, but not rock hard, and is suitable for back sleepers (as vetted by her husband) and side sleepers such as herself.

    The Budget Pick: Allswell Supreme Cooling Hybrid Mattress

    Upsides: Customizable to fit a variety of mattress foundations; easy setup; breathable top layer.
    Downsides: The delivery process was not smooth for our tester; shorter mattress return window than competitors.
    Specs: Mattress type: Hybrid. Materials: Six layers, including copper-infused memory foam, high-density support foam, and pocketed spring coils for cooling comfort. Firmness: Medium. Warranty: 90-day returns; 10-year warranty.

    At 14 inches thick, the Allswell Supreme Mattress certainly makes a statement, but it's the copper foam layer, which contours and has cooling properties, that makes it stand out. "I feel supported yet super comfortable, and, as promised, it keeps me very cool," says contributor Rebecca Grambone in her review. "Their cooling technology actively draws and releases excess heat away from your body." The 1.5-inch quilted top further ensures that you will sleep cool. Note that the Allswell skews firmer, and if you're not used to this, an adjustment period might be in store. Grambone did like that the company prizes customization for different beds, such as those with a box spring, a flat platform bed frame, an adjustable bed frame, or a slatted frame.

    Best Memory Foam Mattress: GhostBed Luxe Foam Mattress

    Upsides: Combination of graphite- and gel-infused memory foam; pressure relief; soft quilted cover.
    Downsides: Incompatible with box springs; 750-pound weight limit.
    Specs: Mattress type: Memory foam. Materials: Cooling gel memory foam. Firmness: Medium-firm. Warranty: 101-night sleep trial; 25-year warranty.

    The tagline for the GhostBed Luxe is "The coolest bed in the world." While we haven't tested every mattress there is, we can note that it has patented Ghost Ice technology (a combination of graphite- and gel-infused memory foam) that absorbs and redistributes body heat, and that the company promises the quilted cover has five times the cooling power. Tester Diane Dragen, the global content strategy and operations director at AD, who also reviewed the GhostBed Luxe in our best mattresses for side sleepers guide, found the gel memory foam delivered on its promise to gently soothe her into sleep. "It's very soothing and meditative, and it does feel like a luxury experience," Dragen explains. Sleeping on this bed is a "very pleasant feeling," especially because her vulnerable hip, shoulder, and back areas are lightly cradled as if she is slumbering on a beach.

    More AD-Approved Cooling Mattresses

    Cariloha Resort Bamboo Mattress

    Upsides: Side wedge supports; white glove service; sustainable.
    Downsides: 72-hour decompression period.
    Specs: Mattress type: Memory foam. Materials: Gel-infused memory foam. Firmness: Medium. Warranty: 100-night sleep trial; 10-year warranty.

    The Cariloha bamboo mattress has Resort in its name because you're supposed to feel like you're on vacation when you sleep. "The mattress is constructed from bamboo memory foam with five distinct layers that adapt and mold to your body shape, resulting in a sleep experience that is both very supportive and pleasantly soft," Aiken says. Additionally, the mattress has a moisture-wicking feature in its Flex-Flow Base Foam that promises to improve airflow and keep you 3 degrees Fahrenheit cooler. The removable and washable cover is also made with bamboo, which is something we know and love when it comes to cooling bed sheets. Aiken also highlights that the side wedge supports "contribute to a feeling of a wider sleep surface (Cariloha claims a 25% increase) but also provide excellent additional structure and reinforcement along the edges and even in the center of the mattress."

    Brentwood Home Oceano Luxury Hybrid Mattress

    Upsides: Comes with free pillows and sleep masks; includes GOTS-certified organic wool and cotton; BioFoam cooling gel.
    Downsides: Pricey.
    Specs: Mattress type: Hybrid. Materials: Cooling gel with BioFoam and coils. Firmness: Medium-soft. Warranty: 365-night sleep trial; 25-year warranty.

    The Brentwood Home Oceano hybrid mattress is what you might dream about if you sleep hot. It is constructed of nine layers, including cooling gel made with BioFoam. The GOTS-certified organic wool and cotton further add to the breathability. "This mattress feels so luxurious after a long day running around NYC," says contributor Nick Mafi in his review of the best mattress brands. "This mattress has that perfectly deluxe feel and best-in-class support. I love sleeping on this mattress, but it's also perfect for reading and editing manuscripts (my main activity!), working on a laptop, and lounging around watching TV with friends." Then there is the support. With nearly 2,700 coils (1,722 micro-coils and 975 pocketed coils around the perimeter), you will never feel as if you're sinking into some sort of abyss. Compared to other mattresses on the list, this one does not have a firm feel. In fact, Mafi was initially wary of its 4.5 rating on the 10-point firmness scale. "I've never considered myself someone who loves a soft bed, but boy was I wrong!" he explains. "I love this mattress. It's squishy, but still supportive. Perhaps it's the Air Luxe foam, which helps with pressure relief, at play."

    Amerisleep AS3 Mattress

    Upsides: Eco-friendly; smooth delivery process; easy setup; no off-gassing.
    Downsides: No contact delivery.
    Specs: Mattress type: Memory foam. Materials: Plant-based memory foam with four layers. Firmness: Medium-soft. Warranty: 100-night sleep trial; 20-year warranty.

    The AS3 is Amerisleep's best-selling mattress. You can sleep cooler in part because it is made with a plant-based memory foam with an open-cell design called Bio-Pur, and it also incorporates HIVE technology to amp up the airflow. To top it off, it comes with a scientifically engineered Refresh cover that uses minerals to manage body heat and is said to keep you 7 degrees cooler than a polyester cover. When it comes to plushness, the mattress is "definitely on the softer side," says tester Rachel Logie, senior analytics manager. "The memory foam bounces back faster than most memory foam mattresses I've tried, so you don't get that 'stuck' feeling."

    Puffy Cloud Mattress

    Upsides: Arrives in a box for easy transport; comes with free pillows and sleep masks; our tester says it has "cloud-like" comfort.
    Downsides: Not sustainably made, unlike some others on our list.
    Specs: Mattress type: Memory foam. Materials: Six-layer memory foam. Firmness: Medium-firm. Warranty: 101-night sleep trial; lifetime warranty.

    The Puffy Cloud is meant to feel like bliss, and it lives up to its name not only in the comfort category but also because the six-layer memory foam mattress incorporates cooling technology that allows the air to circulate, so you never feel like your torso is a prisoner of night sweats. "The light, supportive cradling of the foam layers is comforting and cool," says Dragen. "It really is cloud-like! Unlike other mattresses I've tested, your body doesn't sink into the layers of foam as much as it rests lightly on the surface."
  • Worms Can Smell Death, and It Strangely Alters Their Fertility and Fitness

    WWW.DISCOVERMAGAZINE.COM
    Worms Can Smell Death, and It Strangely Alters Their Fertility and Fitness
    Worms are decomposers. Many survive by breaking down dead things — dead bacteria, dead plants, dead animals, dead anything. So, they must be accustomed to the stench of death. Not so, a new study suggests — not when the dead organism is another worm. Published in Current Biology, the study states that C. elegans roundworms react adversely to the smell of a deceased counterpart. Not only does this smell invoke a behavioral response of corpse avoidance, but it also invokes a physiological response of increased short-term fertility and decreased long-term fitness and lifespan.

    "Caenorhabditis elegans prefers to avoid dead conspecifics," or deceased members of the same species, the authors state in the study, with the worms reacting to death with a range of "aversion" and "survival" responses. Taken together, the results reveal a new signaling mechanism that's available to worms, and possibly other organisms, too, as a means of detecting and responding to death.

    Read More: These Fruit Flies Aged Faster After Seeing Death

    Worms Signal and Detect Death

    C. elegans roundworms aren't the only small organisms that respond to the dead. Ants and bees dispose of the deceased from their colonies, for instance, while fruit flies avoid corpses (and shun flies that have seen corpses themselves). Death-exposed fruit flies even experience faster aging after seeing a deceased counterpart and have shorter lifespans than those that have had no encounters with death. That these animals respond so strongly to the dead is widely documented. So, when the authors of the new study noticed C. elegans worms wriggle away from corpses, they saw the response as a chance to dig deeper into death signaling and detection. Indeed, while many species' reactions to death are mediated mainly by sight, that certainly wasn't the case for wiggling roundworms, which have no eyes and no sense of vision. "We felt this was quite a unique opportunity to start diving into what is happening mechanistically that enables C. elegans to detect a dead conscript," said Matthias Truttmann, a senior study author and a physiologist at the University of Michigan, according to a press release.

    To determine how C. elegans worms detect the dead, Truttmann and his team exposed the worms to conspecific corpses and to fluids taken from the deteriorating cells of those corpses. The worms responded to both with avoidance, moving away regardless of their age and sex, suggesting that the corpses and fluids carried similar signatures of death. These death cues also resulted in short-term increases in fertility, long-term decreases in fitness (represented by a reduced thrashing rate), and long-term decreases in lifespan. But what were those death cues, exactly, and how did the worms pick up on them?

    Sounding a Sensory Alarm

    To figure out what those cues could be, the study authors recorded the activity in the worms' sensory neurons as they encountered the corpses and fluids. The recordings revealed that AWB and ASH, two neurons that are responsible for making sense of olfactory stimuli, were activated when the corpses and fluids were present, indicating that the worms were smelling the signature of death. "The neurons we identified are well known to be involved in behavioral responses to a variety of environmental cues," Truttmann said in the release. According to the study authors, the metabolites AMP and histidine were probably responsible for the signal of death that the C. elegans worms recognized.

    Though these metabolites are typically contained in living cells, they are released when cells die and deteriorate — in this case, triggering the behavioral and physiological responses in C. elegans. "They also detect a couple of intracellular metabolites that are not typically found in the environment. If they are around, it indicates that a cell has died, popped open, and that something has gone wrong," Truttmann said in the release.

    It is possible that cellular metabolites serve as a signal of death in other organisms, too, Truttmann said; in humans, for instance, the release of metabolites from dying and disintegrating cells in one tissue can cause changes in other tissues. Whether this signal sounds the alarm in other organisms is still uncertain. While further research is required to understand the role of cellular metabolites in detecting death across species, for now, it's clear that death is a sensitive subject, even for worms like C. elegans.

    Article Sources: Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the source used for this article: Current Biology.

    Sam Walters is a journalist covering archaeology, paleontology, ecology, and evolution for Discover, along with an assortment of other topics. Before joining the Discover team as an assistant editor in 2022, Sam studied journalism at Northwestern University in Evanston, Illinois.
  • New data confirms: There really is a planet squeezed in between two stars

    How'd that get there?

    The planet may have formed from material transferred between the stars.

    John Timmer – May 22, 2025 2:24 pm

    Credit: NASA/Goddard Space Flight Center
    While our Sun prefers to go solo, many other stars are parts of binary systems, with a pair of stars gravitationally bound to each other. In some cases, the stars are far enough apart that planets can form around each of them. But there are also plenty of tight binary systems, where the stars orbit each other at a radius that would place them both comfortably inside our Solar System. In these systems, exoplanets tend to be found at greater distances, in orbits that have them circling both stars.
    On Wednesday, scientists described a system that seems to be neither of the above. It is a tight binary system, with a heavy central star that's orbited by a white dwarf at a distance two to three times larger than Earth's orbit. The lone planet confirmed to be in the system is squeezed in between the two, orbiting at a distance similar to Earth's distance from the Sun. And, as an added bonus, the planet is orbiting backward relative to the white dwarf.
    Orbiting ν Octantis
    The exosolar system is termed ν Octantis, and its primary star is just a bit heavier than our Sun. It's orbited by a far dimmer companion that's roughly half of our Sun's mass but which hadn't been characterized in detail until now. The companion's orbit relative to the central star is a bit lopsided, ranging from about two astronomical units (AU) at its closest approach to roughly three AU at its farthest. And, until yesterday, the exact nature of the companion star was not clear.
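    As a quick worked example of how lopsided that is, the closest and farthest distances pin down the orbit's shape via the standard two-body relations. The 2 and 3 AU inputs below are the article's rounded figures, so the result is approximate:

        # Standard two-body relations: an orbit spanning r_min to r_max has
        # semi-major axis a = (r_min + r_max) / 2 and eccentricity
        # e = (r_max - r_min) / (r_max + r_min).
        r_min, r_max = 2.0, 3.0                  # AU: closest and farthest approach
        a = (r_min + r_max) / 2                  # 2.5 AU
        e = (r_max - r_min) / (r_max + r_min)    # 0.2, a mildly eccentric orbit
        print(f"a = {a} AU, e = {e:.2f}")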
    The latter question was relatively easy to answer. Detailed imaging of the system in the near infrared should be able to resolve the two stars but was unable to pick up a second object with sufficient brightness. That eliminates any main sequence stars, leaving a white dwarf as the only likely answer. But that's not the only thing that's orbiting the central star of ν Octantis.

    Earlier studies of the system had suggested that there was also an exoplanet present in the system. But the properties of its orbit made little sense, in that nobody could seem to figure out a stable orbit that would be consistent with the observations. The only thing that was clear was that the most stable orbits appeared to require that the planet have a retrograde motion, meaning orbiting in the opposite direction to the companion star. ν Octantis definitely fell into the vast category of "more data is needed" questions.
    And more data is exactly what a small international team of scientists got, with nearly two years of additional observations using the HARPS (High Accuracy Radial velocity Planet Searcher) instrument in Chile. The data clearly confirmed the existence of a planet in a retrograde orbit and suggested that the plane of its orbit was 17° off from the plane formed by the orbits of the two stars. Unfortunately, modeling variations on this orbit through time indicated that 98 percent of them were unstable within 50 million years.
    So, the researchers tested a range of orbital properties that kept everything in a single plane. This produced a solution where 75 percent of the modeled variations remained stable out past 50 million years, and the researchers settled on it as the most likely description of the system.
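    For readers curious what that sort of test looks like in practice, here is a minimal sketch using the open-source REBOUND N-body package. This is not the authors' actual pipeline: the masses, orbital elements, perturbation spreads, and (much shorter) integration time below are illustrative placeholders chosen only to roughly resemble the system described above.

        import numpy as np
        import rebound  # open-source N-body package (pip install rebound)

        def survives(inc_deg, ecc, years=1e5):
            sim = rebound.Simulation()            # default units: G = 1 (Msun, AU, yr/2pi)
            sim.add(m=1.1)                        # central star, a bit heavier than the Sun
            sim.add(m=1e-5, a=1.0, e=ecc,         # planet orbiting near 1 AU
                    inc=np.radians(inc_deg))      # inc of 180 degrees means retrograde
            sim.add(m=0.5, a=2.5, e=0.2)          # white-dwarf companion, ~2-3 AU orbit
            sim.move_to_com()
            sim.exit_max_distance = 20.0          # treat ejection past 20 AU as "unstable"
            try:
                sim.integrate(2 * np.pi * years)  # 2*pi code-time units per year
            except rebound.Escape:
                return False
            return True

        # Perturb the best-fit orbit and count survivors -- the same style of
        # statistic as the paper's "75 percent of orbits stable" result.
        rng = np.random.default_rng(42)
        trials = [survives(180 + rng.normal(0, 3), abs(rng.normal(0.1, 0.05)))
                  for _ in range(20)]
        print(f"{100 * sum(trials) / len(trials):.0f}% of sampled orbits survived")

    The paper's statistic has the same shape: integrate many perturbed variants of the best-fit orbit and report the fraction that stay bound over the full 50-million-year span.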
    These orbits do have the planet in ν Octantis in a retrograde orbit, meaning it's moving in the opposite direction from the smaller star in the system. The orbit's radius is roughly one AU, meaning its typical distance from the central star is similar to Earth's distance from the Sun. But the orbit is somewhat squished, with one half of the orbit being significantly closer to the central star than the other.

    And, critically, the entire orbit is within the orbit of the smaller companion star. The gravitational forces of a tight binary should prevent any planets from forming within this space early in the system's history. So, how did the planet end up in such an unusual configuration?
    A confused past
    The fact that one of the stars present in ν Octantis is a white dwarf suggests some possible explanations. White dwarfs are formed by Sun-like stars that have advanced through a late helium-burning period that causes them to swell considerably, leaving the outer surface of the star weakly bound to the rest of its mass. At the distances within ν Octantis, that would allow considerable material to be drawn off the outer companion and pulled onto the surface of what's now the central star. The net result is a considerable mass transfer.
    This could have done one of two things to place a planet in the interior of the system. One is that the transferred material isn't likely to make an immediate dive onto the surface of the nearby star. If the process is slow enough, it could have produced a planet-forming disk for a brief period—long enough to produce a planet on the interior of the system.
    Alternatively, if there were planets orbiting exterior to both stars, the change in the mass distribution of the system could have destabilized their orbits. That might be enough to cause interactions among the planets to send one of them spiraling inward, where it was eventually captured in the stable retrograde orbit we now find it in.
    Either case, the authors emphasize, should be pretty rare, meaning we're unlikely to have imaged many other systems like this at this stage of our study of exoplanets. They do point to another tight binary, HD 59686, that appears to have a planet in a retrograde orbit. But, as with ν Octantis, the data isn't clear enough to rule out alternative configurations yet. So, once again, more data is needed.
    Nature, 2025. DOI: 10.1038/s41586-025-09006-x.

    John Timmer
    Senior Science Editor

    John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

  • Monster Train 2 review – off the rails Slay The Spire

    Monster Train 2 – running on time is the least of your problems (Big Fan Games)

    One of the few deck-building roguelites to challenge Slay The Spire gets an impressive sequel that may be the most fun you can have on a locomotive.
    Roguelike deck builders are having a moment. A search for them on Steam will net you an astounding 861 results, making it a category that’s quite a bit more populous than you might imagine. Despite the high number of matches though, it’s a genre that’s been made famous primarily by just two games: Slay The Spire and Balatro.
    The latter is regularly cited as one of the best games of 2024, but it’s the former whose content and style are closest to Monster Train, which was originally released in 2020. It was a game about defending the frozen wastes of Hell against the invading forces of Heaven. In its sequel, Heaven and Hell are forced to unite to face the Titans, a new threat that could lead to the destruction of both realms.
    None of that’s especially relevant to the gameplay, which once again takes place onboard a quadruple-decker train. The turn-based battles are waged across the bottom three floors, with the train’s penthouse reserved for the pyre, the burning heart of your train, which in a mechanic borrowed from tower defence games is effectively the train’s power bar. Your job is to stop invaders reaching the pyre, because if they do and its health gets down to zero, it’s game over.
    In the original, that often meant stacking your third floor with the strongest troops you had available. The sequel prefers you to mount a defence across all three floors and, to encourage that, there are now room-level upgrades available that, for example, grant valour – the stat that equates to armour – to all troops on a floor, or reduce the cost of magic, making different floors more suitable for certain troop types.
    This adds a fresh layer of tactics and feeds into the meta game of deck building. There are now a total of 10 different clans to choose from, with each run featuring a main and support clan, both of whose cards you’ll have available as you play. Completing runs earns experience for the clans you’re using and as each one levels up, you’ll slowly gain access to more of their cards. Naturally, the game tends to gate the more powerful ones behind those higher experience levels.
    All of this reinforces the fact that Monster Train 2 is very much a roguelite, your power growing as you unlock new cards and spells and add permanent upgrades that make each subsequent run easier. It also adds a pleasing sense of progress, which persists even after a run that otherwise went badly. Plus, you’ll still earn experience and potentially extra cards or magic items to assist in future escapades.
    As with all roguelites, there’s a powerful sense of repetition, with the entirety of the game’s action taking place in the relatively claustrophobic confines of your train’s four storeys. It’s fair to say though, that the random elements in runs tend to make each one feel quite different from the last, especially as you start to unlock more clans and the extra cards they offer.
    To add further variation, there are challenges, which you play on a grid, with the next one opening up once you’ve beaten its nearest neighbour. Challenge levels constrain you to the use of specific clans and each comes with ‘mutators’ that add extra conditions, like reducing the cost of spells or giving certain card types extra health or attack strength.
    You can also change your pyre heart. Each heart has different attack and defence stats, which come into play when the top floor of your train is invaded by Titans, and each comes with a special ability. These can be anything from reduced prices at the shops you encounter after each level, to more esoteric benefits, like the power to heal the front unit on each floor of the train once per battle.
    This adds to the interconnected network of effects that stack to create some truly formidable stat increases, even if it’s not easy remembering what’s active and how each of those different buffs interacts with the others. Obviously, the game automatically calculates all the bonuses on each attack and defensive play you make, but it can be tricky keeping all those layered effects in mind when you’re placing cards or activating spells.
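    To see why those layered effects are hard to track, note that buff systems in this genre typically resolve as an ordered pipeline of flat bonuses followed by multipliers, so both the order and the count of active effects matter. The sketch below is a generic, hypothetical model of that pattern, not Shiny Shoe's actual rules.

        # Hypothetical model of stacking buffs: flat bonuses apply first,
        # then multipliers, so effects compound. Not Monster Train 2's
        # actual formula; just the general pattern the review describes.
        from dataclasses import dataclass, field

        @dataclass
        class Unit:
            base_attack: int
            flat_bonuses: list = field(default_factory=list)   # e.g. room upgrades
            multipliers: list = field(default_factory=list)    # e.g. spell buffs

            def effective_attack(self) -> int:
                attack = self.base_attack + sum(self.flat_bonuses)
                for m in self.multipliers:
                    attack = int(attack * m)
                return attack

        unit = Unit(base_attack=10)
        unit.flat_bonuses.append(5)     # a floor-wide upgrade
        unit.multipliers.append(2.0)    # a doubling spell
        print(unit.effective_attack())  # 30: (10 + 5) * 2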


    It’s also important to know which bosses you’ll be dealing with and to plan accordingly. There’s only so much you can do when you’re always partly dependent on the luck of which cards you draw, but you can still make sure you have troops available that act to counter bosses’ special abilities, hopefully containing them before they can overwhelm your defences.
    There’s notably more focus on character and story in this sequel, the plot playing out in a series of text-only encounters triggered when you return to the game’s hub between runs. Clearly inspired by Hades, it doesn’t quite equal that game’s wit and personality, but it’s nice to see additional elements fleshing out the game beyond its core of quick-fire, turn-based combat.
    If you loved the original Monster Train, this goes further than simply delivering more of the same. There are fresh strategic options and new combinations of troops and spells to experiment with, as well as cards from the game’s new clans to unlock and slot into your deck. There are many games that try to copy Slay The Spire and yet very few that come close to its quality, but Monster Train 2 is certainly on track in that regard.

    Monster Train 2 review summary

    In Short: An effective expansion of the original’s deck-building roguelite structure that adds lots of enjoyable new features and becomes one of the few games to rival Slay The Spire.
    Pros: Pacy and easy to understand, with complexity layered in as you progress. Lots of fresh systems and mechanics to try out, and as immaculately well balanced as ever.
    Cons: Eventually gets repetitive. Using a controller isn’t as intuitive as a mouse or touchscreen. Some runs can be severely compromised by random factors beyond your control.
    Score: 8/10

    Formats: PlayStation 5 (reviewed), Nintendo Switch, Xbox Series X/S, and PC
    Price: £19.99
    Publisher: Good Shepherd Entertainment
    Developer: Shiny Shoe
    Release Date: 21st May 2025
    Age Rating: 7

    The world’s least authentic train simulator (Big Fan Games)
  • Interview: Rom Kosla, CIO, Hewlett Packard Enterprise

    When Rom Kosla, CIO at Hewlett Packard Enterprise (HPE), joined the technology giant in July 2023, the move represented a big shift in direction. Previously CIO at retailer Ahold Delhaize and CIO for enterprise solutions at PepsiCo, Kosla was a consumer specialist who wanted to apply his knowledge in a new sector.
    “I liked the idea of working in a different industry,” he says. “I went from consumer products to retail grocery. Moving into the tech industry was a bit nerve-wracking because the concept of who the customers are is different. But since I grew up in IT, I figured I’d have the ability to navigate my way through the company.”
    Kosla had previously worked as a project manager for Nestlé and spent time with the consultancy Deloitte. Now approaching two years with HPE, Kosla leads HPE’s technology strategy and is responsible for how the company harnesses artificial intelligence (AI) and data. He also oversees e-commerce, app development, enterprise resource planning (ERP) and security operations.
    “The role has exceeded my expectations,” he says. “When you’re a CIO at a multinational, like when I was a divisional CIO at PepsiCo, you’re in the back office. Whether it’s strategy, transformation or customer engagement, the systems are the enablers of that back-office effort. At HPE, it’s different because we are customer zero.”
    Kosla says he prefers the term “customer gold” because he wants HPE to develop high-quality products. In addition to setting the internal digital strategy, he has an outward-facing role providing expert advice to customers. That part of his role reminds him of his time at Deloitte.
    “Those are opportunities to flex my prior experience and capabilities, and learn how to take our products, enable them, and share best practices,” he says. “HPE is like any other company. We use cloud systems and software-as-a-service products, including Salesforce and others. But underneath, we have HPE powering a lot of the capabilities.”

    The press release announcing Kosla’s appointment in 2023 said HPE believed his prior experiences in the digital front-end and running complex supply chains made him the perfect person to build on its digital transformation efforts. So, how has that vision panned out?
    “What’s been interesting is helping the business and IT team think about the end-to-end value stream,” he says. “There was a lot of application-specific knowledge. The ability for processes to be optimised at an application layer versus the end-to-end value stream was only happening in certain spots.”
    Kosla discovered the organisation had spent two years moving to a private cloud installation on the company’s hardware and had consolidated 20-plus ERP systems under one SAP instance. With much of the transformation work complete, his focus turned to making the most of these assets.
    “The opportunity was not to shepherd up transformation, it was taking the next step, which was optimising,” says Kosla, explaining how he had boosted supply chain performance in his earlier roles. He’s now applying that knowledge at HPE.
    “What we’ve been doing is slicing areas of opportunity,” he says. “With the lead-to-quote process, for example, we have opportunities to optimise, depending on the type of business, such as the channel and distributors. We’re asking things like, ‘Can we get a quote out as quickly as possible, can we price it correctly, and can we rely less on human engagement?’”
    HPE announced a cost-reduction programme in March to reduce structural operating costs. The programme is expected to be implemented through fiscal year 2026 and deliver gross savings of approximately m by fiscal year 2027, including through workforce reductions. The programme of work in IT will help the company move towards these targets.
    Kosla says optimisation in financials might mean closing books faster. In the supply chain, the optimisation might be about predicting the raw materials needed to create products. He takes a term from his time in the consumer-packaged goods sector – right to play, right to win – to explain how his approach helps the business look for value-generating opportunities.
    “So, do we have the right to play, meaning do we have the skills? Where do we have the right to win, meaning do we have the funding, business resources and availability to deliver the results? We spend time focusing on which areas offer the right to play and the right to win.”

    Kosla says data and AI play a key role in these optimisations. HPE uses third-party applications with built-in AI capabilities and has developed an internal chat solution called ChatHPE, a generative AI hub used for internal processes.
    “There are lots of conversations around how we unlock the benefits of AI in the company,” he says. Professionals across the company use Microsoft Copilot in their day-to-day roles to boost productivity. Developers, meanwhile, use GitHub Copilot.
    Finally, there’s ChatHPE, which Kosla says is used according to the functional use case. HPE started developing the platform about 18 months ago. A pipeline of use cases has now been developed, including helping legal teams to review contracts, boosting customer service in operations, re-using campaign elements in marketing and improving analytics in finance.

    “We spend time focusing on which areas offer the right to play and the right to win”
    Rom Kosla, Hewlett Packard Enterprise

    “We have a significant amount of governance internally,” says Kosla, referring to ChatHPE, which is powered by Azure and OpenAI technology. “When I started, there wasn’t an internal HPE AI engine. We had to tell the teams not to use the standard tools because any data that you feed into them is ultimately extracted. So, we had to create our platform.”
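    HPE hasn’t published ChatHPE’s internals, but the pattern Kosla describes – an internal, governed gateway in front of Azure-hosted OpenAI models, so prompts never leave company control unchecked – commonly looks something like the following sketch. The deployment name and the simple redaction rule here are hypothetical placeholders.

        # Hypothetical sketch of a governed internal gateway in front of
        # Azure OpenAI, the pattern described for ChatHPE. The deployment
        # name and the basic redaction rule are illustrative only.
        import os
        import re
        from openai import AzureOpenAI

        client = AzureOpenAI(
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-02-01",
        )

        EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

        def governed_chat(prompt: str) -> str:
            """Redact obvious PII before the prompt leaves the boundary."""
            redacted = EMAIL.sub("[redacted]", prompt)
            response = client.chat.completions.create(
                model="chathpe-gpt4",   # hypothetical deployment name
                messages=[{"role": "user", "content": redacted}],
            )
            return response.choices[0].message.content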
    Embracing AI isn’t Kosla’s only concern. Stabilisation is a big part of what he needs to achieve during the next 12 months. He returns to HPE’s two major transformation initiatives – the shift to private cloud and the consolidation of ERP platforms – suggesting that the dual roll-out and management of these initiatives created a significant number of incidents.
    “When I look back at PepsiCo, we had about 300,000 employees and about 600,000 tickets, which means two tickets per person per year. I said to the executive committee at HPE, ‘We have 60,000 employees, and we have a couple of million tickets’, which is an insane number. The goal was to bring that number down by about 85%,” he says.
    “Now, our system uptime is 99% across our quoting and financial systems. That availability allows our business to do more than focus on internal IT. They can focus on the customer. Stabilisation means the business isn’t constantly thinking about IT systems, because it’s a challenge to execute every day when systems are going down because of issues.”
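    The arithmetic behind that 85% target is worth making explicit; taking “a couple of million” as two million tickets purely for illustration:

        # Quick check of the ticket figures Kosla quotes ("a couple of
        # million" is taken as 2 million for illustration).
        pepsico_per_head = 600_000 / 300_000           # 2 tickets/employee/year
        hpe_before = 2_000_000 / 60_000                # ~33 tickets/employee/year
        hpe_after = 2_000_000 * (1 - 0.85) / 60_000    # ~5 after an 85% cut
        print(pepsico_per_head, round(hpe_before), round(hpe_after))   # 2.0 33 5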

    Kosla says the long-term aim from an IT perspective is to align the technology organisation with business outcomes. In financials, for example, he wants to produce the data analytics the business needs across the supply chain and operational processes.
    “We have embedded teams that work together to look at how we enable data, like our chat capabilities, into some of the activities,” he says. “They’ll consider how we reduce friction, especially the manual steps. They’ll also consider planning, from raw materials to the manufacturing and delivery of products. That work involves partnering with the business.”
    The key to success for the IT team is to help the business unlock value quicker. “I would say that’s the biggest part for us,” says Kosla. “We don’t even like to use the word speed – we say velocity, because velocity equals direction, and that’s crucial for us. I think the business is happy with what we’ve been able to achieve, but it’s still not fast enough.”
    Being able to deliver results at pace will rely on new levels of flexibility. Rather than being wedded to a 12-month plan that maps out a series of deliverables, Kosla wants his team to work more in the moment. Prior experiences from the consumer sector give him a good sense of what excellence looks like in this area.
    “You don’t need to go back to the top, go through an annual planning review, go back down, and then have the teams twiddling their thumbs while they wait for the OK,” he says.
    “The goal is that teams are constantly working on what’s achievable during a sprint window. Many companies take that approach; I’ve done it in my prior working life. I know what can happen, and I think flexibility will drive value creation.”
    Kosla says some of the value will come from HPE’s in-house developed technologies. “One of the things that makes this role fun is that there’s a significant amount of innovation the company is doing,” he says, pointing to important technologies, such as Morpheus VM Essentials virtualisation software, the observability platform OpsRamp, and Aruba Networking Access Points.
    “What I’m proud of is that we now show up to customers with comparability,” he says, talking about the advisory part of his role. “We can say, ‘Look, we use both products, because in some cases, it’s a migration over time.’ So, for example, when a customer asks about our observability approach, we can compare our technology with other providers.”

    Kosla reflects on his career and ponders the future of the CIO role, suggesting responsibilities will vary considerably according to sector. “Digital leaders still maintain IT systems in some industries,” he says.
    “However, the rest of the business is now much more aware of technology. The blurring of lines between business and IT means it’s tougher to differentiate between the two areas. I think we’ll see more convergence.”
    Kosla says a growing desire to contain costs often creates a close relationship between IT and finance leaders. Once again, he expects further developments in that partnership. He also anticipates that cyber will remain at the forefront of digital leaders’ priority lists.
    More generally, he believes all IT professionals are becoming more focused on business priorities. “I think the blurring will continue to create interesting results, especially in technology companies,” he says. “We want to do things differently.”

    More interviews with tech company IT leaders

    Interview: Joe Depa, global chief innovation officer, EY – Accounting firm EY is focused on ‘AI-ready data’ to maximise the benefits of agentic AI and enable the use of emerging frontier technologies for its business and clients.
    Interview: Cynthia Stoddard, CIO, Adobe – After nearly 10 years in post, Adobe’s CIO is still driving digital transformation and looking to deliver lasting change through technology.
    Interview: Tomer Cohen, chief product officer, LinkedIn – The professional social network’s product chief is leading the introduction of artificial intelligence for the firm’s in-house development processes and to enhance services for users.
    #interview #rom #kosla #cio #hewlett
    Interview: Rom Kosla, CIO, Hewlett Packard Enterprise
    When Rom Kosla, CIO at Hewlett Packard Enterprise, joined the technology giant in July 2023, the move represented a big shift in direction. Previously CIO at retailer Ahold Delhaize and CIO for enterprise solutions at PepsiCo, Kosla was a consumer specialist who wanted to apply his knowledge in a new sector. “I liked the idea of working in a different industry,” he says. “I went from consumer products to retail grocery. Moving into the tech industry was a bit nerve-wracking because the concept of who the customers are is different. But since I grew up in IT, I figured I’d have the ability to navigate my way through the company.” Kosla had previously worked as a project manager for Nestlé and spent time with the consultancy Deloitte. Now approaching two years with HPE, Kosla leads HPE’s technology strategy and is responsible for how the company harnesses artificial intelligenceand data. He also oversees e-commerce, app development, enterprise resource planningand security operations. “The role has exceeded my expectations,” he says. “When you’re a CIO at a multinational, like when I was a divisional CIO at PepsiCo, you’re in the back office. Whether it’s strategy, transformation or customer engagement, the systems are the enablers of that back-office effort. At HPE, it’s different because we are customer zero.” Kosla says he prefers the term “customer gold” because he wants HPE to develop high-quality products. In addition to setting the internal digital strategy, he has an outward-facing role providing expert advice to customers. That part of his role reminds him of his time at Deloitte. “Those are opportunities to flex my prior experience and capabilities, and learn how to take our products, enable them, and share best practices,” he says. “HPE is like any other company. We use cloud systems and software-as-a-service products, including Salesforce and others. But underneath, we have HPE powering a lot of the capabilities.” The press release announcing Kosla’s appointment in 2023 said HPE believed his prior experiences in the digital front-end and running complex supply chains made him the perfect person to build on its digital transformation efforts. So, how has that vision panned out? “What’s been interesting is helping the business and IT team think about the end-to-end value stream,” he says. “There was a lot of application-specific knowledge. The ability for processes to be optimised at an application layer versus the end-to-end value stream was only happening in certain spots.” Kosla discovered the organisation had spent two years moving to a private cloud installation on the company’s hardware and had consolidated 20-plus ERP systems under one SAP instance. With much of the transformation work complete, his focus turned to making the most of these assets. “The opportunity was not to shepherd up transformation, it was taking the next step, which was optimising,” says Kosla, explaining how he had boosted supply chain performance in his earlier roles. He’s now applying that knowledge at HPE. “What we’ve been doing is slicing areas of opportunity,” he says. “With the lead-to-quote process, for example, we have opportunities to optimise, depending on the type of business, such as the channel and distributors. We’re asking things like, ‘Can we get a quote out as quickly as possible, can we price it correctly, and can we rely less on human engagement?’” HPE announced a cost-reduction programme in March to reduce structural operating costs. 
The programme is expected to be implemented through fiscal year 2026 and deliver gross savings of approximately m by fiscal year 2027, including through workforce reductions. The programme of work in IT will help the company move towards these targets. Kosla says optimisation in financials might mean closing books faster. In the supply chain, the optimisation might be about predicting the raw materials needed to create products. He takes a term from his time in the consumer-packaged goods sector – right to play, right to win – to explain how his approach helps the business look for value-generating opportunities. “So, do we have the right to play, meaning do we have the skills? Where do we have the right to win, meaning do we have the funding, business resources and availability to deliver the results? We spend time focusing on which areas offer the right to play and the right to win.” Kosla says data and AI play a key role in these optimisations. HPE uses third-party applications with built-in AI capabilities and has developed an internal chat solution called ChatHPE, a generative AI hub used for internal processes. “There are lots of conversations around how we unlock the benefits of AI in the company,” he says. Professionals across the company use Microsoft Copilot in their day-to-day roles to boost productivity. Developers, meanwhile, use GitHub Copilot. Finally, there’s ChatHPE, which Kosla says is used according to the functional use case. HPE started developing the platform about 18 months ago. A pipeline of use cases has now been developed, including helping legal teams to review contracts, boosting customer service in operations, re-using campaign elements in marketing and improving analytics in finance. “We spend time focusing on which areas offer the right to play and the right to win” Rom Kosla, Hewlett Packard Enterprise “We have a significant amount of governance internally,” says Kosla, referring to ChatHPE, which is powered by Azure and OpenAI technology. “When I started, there wasn’t an internal HPE AI engine. We had to tell the teams not to use the standard tools because any data that you feed into them is ultimately extracted. So, we had to create our platform.” Embracing AI isn’t Kosla’s only concern. Stabilisation is a big part of what he needs to achieve during the next 12 months. He returns to HPE’s two major transformation initiatives – the shift to private cloud and the consolidation of ERP platforms – suggesting that the dual roll-out and management of these initiatives created a significant number of incidents. “When I look back at PepsiCo, we had about 300,000 employees and about 600,000 tickets, which means two tickets per person per year. I said to the executive committee at HPE, ‘We have 60,000 employees, and we have a couple of million tickets’, which is an insane number. The goal was to bring that number down by about 85%,” he says. “Now, our system uptime is 99% across our quoting and financial systems. That availability allows our business to do more than focus on internal IT. They can focus on the customer. Stabilisation means the business isn’t constantly thinking about IT systems, because it’s a challenge to execute every day when systems are going down because of issues.” Kosla says the long-term aim from an IT perspective is to align the technology organisation with business outcomes. In financials, for example, he wants to produce the data analytics the business needs across the supply chain and operational processes. 
“We have embedded teams that work together to look at how we enable data, like our chat capabilities, into some of the activities,” he says. “They’ll consider how we reduce friction, especially the manual steps. They’ll also consider planning, from raw materials to the manufacturing and delivery of products. That work involves partnering with the business.” The key to success for the IT team is to help the business unlock value quicker. “I would say that’s the biggest part for us,” says Kosla. “We don’t even like to use the word speed – we say velocity, because velocity equals direction, and that’s crucial for us. I think the business is happy with what we’ve been able to achieve, but it’s still not fast enough.” Being able to deliver results at pace will rely on new levels of flexibility. Rather than being wedded to a 12-month plan that maps out a series of deliverables, Kosla wants his team to work more in the moment. Prior experiences from the consumer sector give him a good sense of what excellence looks like in this area. “You don’t need to go back to the top, go through an annual planning review, go back down, and then have the teams twiddling their thumbs while they wait for the OK,” he says. “The goal is that teams are constantly working on what’s achievable during a sprint window. Many companies take that approach; I’ve done it in my prior working life. I know what can happen, and I think flexibility will drive value creation.” Kosla says some of the value will come from HPE’s in-house developed technologies. “One of the things that makes this role fun is that there’s a significant amount of innovation the company is doing,” he says, pointing to important technologies, such as Morpheus VM Essentials virtualisation software, the observability platform OpsRamp, and Aruba Networking Access Points. “What I’m proud of is that we now show up to customers with comparability,” he says, talking about the advisory part of his role. “We can say, ‘Look, we use both products, because in some cases, it’s a migration over time.’ So, for example, when a customer asks about our observability approach, we can compare our technology with other providers.” Kosla reflects on his career and ponders the future of the CIO role, suggesting responsibilities will vary considerably according to sector. “Digital leaders still maintain IT systems in some industries,” he says. “However, the rest of the business is now much more aware of technology. The blurring of lines between business and IT means it’s tougher to differentiate between the two areas. I think we’ll see more convergence.” Kosla says a growing desire to contain costs often creates a close relationship between IT and finance leaders. Once again, he expects further developments in that partnership. He also anticipates that cyber will remain at the forefront of digital leaders’ priority lists. More generally, he believes all IT professionals are becoming more focused on business priorities. “I think the blurring will continue to create interesting results, especially in technology companies,” he says. “We want to do things differently.” interviews with tech company IT leaders Interview: Joe Depa, global chief innovation officer, EY – Accounting firm EY is focused on ‘AI-ready data’ to maximise the benefits of agentic AI and enable the use of emerging frontier technologies for its business and clients. 
Interview: Cynthia Stoddard, CIO, Adobe – After nearly 10 years in post, Adobe’s CIO is still driving digital transformation and looking to deliver lasting change through technology.

Interview: Tomer Cohen, chief product officer, LinkedIn – The professional social network’s product chief is leading the introduction of artificial intelligence for the firm’s in-house development processes and to enhance services for users.
  • Building AI Applications in Ruby

    This is the second in a multi-part series on creating web applications with generative AI integration. Part 1 focused on explaining the AI stack and why the application layer is the best place in the stack to be. Check it out here.

    Table of Contents

    Introduction

    I thought spas were supposed to be relaxing?

    Microservices are for Macrocompanies

    Ruby and Python: Two Sides of the Same Coin

    Recent AI-based Gems

    Summary

    Introduction

    It’s not often that you hear the Ruby language mentioned when discussing AI.

    Python, of course, is the king in this world, and for good reason. The community has coalesced around the language. Most model training is done in PyTorch or TensorFlow these days. Scikit-learn and Keras are also very popular. RAG frameworks such as LangChain and LlamaIndex cater primarily to Python.

    However, when it comes to building web applications with AI integration, I believe Ruby is the better language.

    As the co-founder of an agency dedicated to building MVPs with generative AI integration, I frequently hear potential clients complaining about two things:

    Applications take too long to build

    Developers are quoting insane prices to build custom web apps

    These complaints have a common source: complexity. Modern web apps have a lot more complexity in them than in the good ol’ days. But why is this? Are the benefits brought by complexity worth the cost?

    I thought spas were supposed to be relaxing?

    One big piece of the puzzle is the recent rise of single-page applications (SPAs). The most popular stack used today in building modern SPAs is MERN (MongoDB, Express.js, React.js, Node.js). The stack is popular for a few reasons:

    It is a JavaScript-only stack, across both front-end and back-end. Having to code in only one language is pretty nice!

    SPAs can offer dynamic designs and a “smooth” user experience. Smooth here means that when some piece of data changes, only a part of the site is updated, as opposed to having to reload the whole page. Of course, if you don’t have a modern smartphone, SPAs won’t feel so smooth, as they tend to be pretty heavy. All that JavaScript starts to drag down the performance.

    There is a large ecosystem of libraries and developers with experience in this stack. This is pretty circular logic: is the stack popular because of the ecosystem, or is there an ecosystem because of the popularity? Either way, this point stands.

    React was created by Meta. Lots of money and effort has been thrown at the library, helping to polish and promote the product.

    Unfortunately, there are some downsides of working in the MERN stack, the most critical being the sheer complexity.

    Traditional web development was done using the Model-View-Controller (MVC) paradigm. In MVC, all of the logic managing a user’s session is handled in the backend, on the server. Something like fetching a user’s data was done via function calls and SQL statements in the backend. The backend then serves fully built HTML and CSS to the browser, which just has to display it. Hence the name “server”.

    In a SPA, this logic is handled on the user’s browser, in the frontend. SPAs have to handle UI state, application state, and sometimes even server state all in the browser. API calls have to be made to the backend to fetch user data. There is still quite a bit of logic on the backend, mainly exposing data and functionality through APIs.
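
    To make the MVC side concrete, here is a minimal sketch of the server-side flow in Rails. It is an illustration only; the UsersController and User model are hypothetical names:

    # app/controllers/users_controller.rb
    class UsersController < ApplicationController
      def show
        # Data is fetched on the server via a method call that issues SQL
        @user = User.find(params[:id])
      end
    end

    # app/views/users/show.html.erb is rendered to finished HTML on the server:
    #   <h1><%= @user.name %></h1>

    The browser receives fully built HTML and simply displays it; there is no client-side state to manage.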

    To illustrate the difference, let me use the analogy of a commercial kitchen. The customer will be the frontend and the kitchen will be the backend.

    MVCs vs. SPAs. Image generated by ChatGPT.

    Traditional MVC apps are like dining at a full-service restaurant. Yes, there is a lot of complexity (and yelling, if The Bear is to be believed) in the backend. But the frontend experience is simple and satisfying: all the customer has to do is pick up a fork and eat their food.

    SPAs are like eating at a buffet-style dining restaurant. There is still quite a bit of complexity in the kitchen. But now the customer also has to decide what food to grab, how to combine them, how to arrange them on the plate, where to put the plate when finished, etc.

    Andrej Karpathy had a tweet recently discussing his frustration with attempting to build web apps in 2025. It can be overwhelming for those new to the space.

    The reality of building web apps in 2025 is that it's a bit like assembling IKEA furniture. There's no "full-stack" product with batteries included, you have to piece together and configure many individual services: frontend/backend (e.g. React, Next.js, APIs), hosting… — Andrej Karpathy (@karpathy), March 27, 2025

    In order to build MVPs with AI integration rapidly, our agency has decided to forgo the SPA and instead go with the traditional MVC approach. In particular, we have found Ruby on Rails (often denoted as Rails) to be the framework best suited to quickly developing and deploying quality apps with AI integration. Ruby on Rails was developed by David Heinemeier Hansson in 2004 and has long been known as a great web framework, but I would argue it has recently made leaps in its ability to incorporate AI into apps, as we will see.

    Django is the most popular Python web framework, and also has a more traditional pattern of development. Unfortunately, in our testing we found Django was simply not as full-featured or “batteries included” as Rails is. As a simple example, Django has no built-in background job system. Nearly all of our apps incorporate background jobs, so to not include this was disappointing. We also prefer how Rails emphasizes simplicity, with Rails 8 encouraging developers to easily self-host their apps instead of going through a provider like Heroku. They also recently released a stack of tools meant to replace external services like Redis.
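
    For context, background jobs in Rails run through the built-in Active Job framework. A minimal sketch of what that looks like (the SummarizeJob name and the LLM call are hypothetical):

    # app/jobs/summarize_job.rb
    class SummarizeJob < ApplicationJob
      queue_as :default

      def perform(document_id)
        # Long-running work, such as calling an LLM, happens off the request cycle
      end
    end

    # Enqueue from anywhere in the app:
    SummarizeJob.perform_later(document.id)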

    “But what about the smooth user experience?” you might ask. The truth is that modern Rails includes several ways of crafting SPA-like experiences without all of the heavy JavaScript. The primary tool is Hotwire, which bundles tools like Turbo and Stimulus. Turbo lets you dynamically change pieces of HTML on your webpage without writing custom JavaScript. For the times where you do need to include custom JavaScript, Stimulus is a minimal JavaScript framework that lets you do just that. Even if you want to use React, you can do so with the react-rails gem. So you can have your cake, and eat it too!
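
    As a sketch of how little code this takes, a Turbo Frame only needs matching frame tags on both pages (the messages resource here is a hypothetical example):

    <!-- app/views/messages/index.html.erb -->
    <%= turbo_frame_tag "new_message" do %>
      <%= link_to "New message", new_message_path %>
    <% end %>

    <!-- app/views/messages/new.html.erb -->
    <%= turbo_frame_tag "new_message" do %>
      <%= render "form", message: @message %>
    <% end %>

    Clicking the link swaps in just the form, without a full page reload and without any hand-written JavaScript.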

    SPAs are not the only reason for the increase in complexity, however. Another has to do with the advent of the microservices architecture.

    Microservices are for Macrocompanies

    Once again, we find ourselves comparing the simple past with the complexity of today.

    In the past, software was primarily developed as monoliths. A monolithic application means that all the different parts of your app — such as the user interface, business logic, and data handling — are developed, tested, and deployed as one single unit. The code is all typically housed in a single repo.

    Working with a monolith is simple and satisfying. Running a development setup for testing purposes is easy. You are working with a single database schema containing all of your tables, making queries and joins straightforward. Deployment is simple, since you just have one container to look at and modify.

    However, once your company scales to the size of a Google or Amazon, real problems begin to emerge. With hundreds or thousands of developers contributing simultaneously to a single codebase, coordinating changes and managing merge conflicts becomes increasingly difficult. Deployments also become more complex and risky, since even minor changes can blow up the entire application!

    To manage these issues, large companies began to coalesce around the microservices architecture. This is a style of programming where you design your codebase as a set of small, autonomous services. Each service owns its own codebase, data storage, and deployment pipelines. As a simple example, instead of stuffing all of your logic regarding an OpenAI client into your main app, you can move that logic into its own service. To call that service, you would then typically make REST calls, as opposed to function calls. This ups the complexity, but resolves the merge conflict and deployment issues, since each team in the organization gets to work on their own island of code.
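
    The difference in call overhead is easy to see in code. A hedged sketch, with a hypothetical AiClient class and internal service URL:

    # Monolith: a plain method call
    summary = AiClient.new.summarize(text)

    # Microservice: the same capability behind a REST endpoint,
    # plus the retries, timeouts and serialization you now have to get right
    require "net/http"
    require "json"

    uri = URI("http://ai-service.internal/summarize")
    response = Net::HTTP.post(uri, { text: text }.to_json, "Content-Type" => "application/json")
    summary = JSON.parse(response.body)["summary"]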

    Another benefit to using microservices is that they allow for a polyglot tech stack. This means that each team can code up their service using whatever language they prefer. If one team prefers JavaScript while another likes Python, this is no issue. When we first began our agency, this idea of a polyglot stack pushed us to use a microservices architecture. Not because we had a large team, but because we each wanted to use the “best” language for each functionality. This meant:

    Using Ruby on Rails for web development. It’s been battle-tested in this area for decades.

    Using Python for the AI integration, perhaps deployed with something like FastAPI. Serious AI work requires Python, I was led to believe.

    Two different languages, each focused on its area of specialty. What could go wrong?

    Unfortunately, we found the process of development frustrating. Just setting up our dev environment was time-consuming. Having to wrangle Docker compose files and manage inter-service communication made us wish we could go back to the beauty and simplicity of the monolith. Having to make a REST call and set up the appropriate routing in FastAPI instead of making a simple function call sucked.

    “Surely we can’t develop AI apps in pure Ruby,” I thought. And then I gave it a try.

    And I’m glad I did.

    I found the process of developing an MVP with AI integration in Ruby very satisfying. We were able to sprint where before we were jogging. I loved the emphasis on beauty, simplicity, and developer happiness in the Ruby community. And I found the state of the AI ecosystem in Ruby to be surprisingly mature and getting better every day.

    If you are a Python programmer and are scared off by learning a new language like I was, let me comfort you by discussing the similarities between the Ruby and Python languages.

    Ruby and Python: Two Sides of the Same Coin

    I consider Python and Ruby to be like cousins. Both languages incorporate:

    High-level Interpretation: This means they abstract away a lot of the complexity of low-level programming details, such as memory management.

    Dynamic Typing: Neither language requires you to specify if a variable is an int, float, string, etc. The types are checked at runtime.

    Object-Oriented Programming: Both languages are object-oriented. Both support classes, inheritance, polymorphism, etc. Ruby is more “pure”, in the sense that literally everything is an object, whereas in Python a few things (such as if and for statements) are not objects.

    Readable and Concise Syntax: Both are considered easy to learn. Either is great for a first-time learner.

    Wide Ecosystem of Packages: Packages to do all sorts of cool things are available in both languages. In Python they are called libraries, and in Ruby they are called gems.

    The primary difference between the two languages lies in their philosophy and design principles. Python’s core philosophy can be described as:

    There should be one — and preferably only one — obvious way to do something.

    In theory, this should emphasize simplicity, readability, and clarity. Ruby’s philosophy can be described as:

    There’s always more than one way to do something. Maximize developer happiness.

    This was a shock to me when I switched over from Python. Check out this simple example emphasizing this philosophical difference:

    # A fight over philosophy: iterating over an array
    # Pythonic way
    for i in range(1, 6):
        print(i)

    # Ruby way, option 1
    (1..5).each do |i|
      puts i
    end

    # Ruby way, option 2
    for i in 1..5
      puts i
    end

    # Ruby way, option 3
    5.times do |i|
      puts i + 1
    end

    # Ruby way, option 4
    (1..5).each { |i| puts i }

    Another difference between the two is syntax style. Python primarily uses indentation to denote code blocks, while Ruby uses do…end or {…} blocks. Most developers include indentation inside Ruby blocks, but it is entirely optional. Examples of these syntactic differences can be seen in the code shown above.

    There are a lot of other little differences to learn. For example, in Python string interpolation is done using f-strings: f"Hello, {name}!", while in Ruby they are done using hashtags: "Hello, #{name}!". Within a few months, I think any competent Python programmer can transfer their proficiency over to Ruby.

    Recent AI-based Gems

    Despite not being in the conversation when discussing AI, Ruby has had some recent advancements in the world of gems. I will highlight some of the most impressive recent releases that we have been using in our agency to build AI apps:

    RubyLLM — Any GitHub repo that gets more than 2k stars within a few weeks of release deserves a mention, and RubyLLM is definitely worthy. I have used many clunky implementations of LLM providers from libraries like LangChain and LlamaIndex, so using RubyLLM was like a breath of fresh air. As a simple example, let’s take a look at a tutorial demonstrating multi-turn conversations:

    require 'ruby_llm'

    # Create a model and give it instructions
    chat = RubyLLM.chat
    chat.with_instructions "You are a friendly Ruby expert who loves to help beginners."

    # Multi-turn conversation
    chat.ask "Hi! What does attr_reader do in Ruby?"
    # => "Ruby creates a getter method for each symbol...

    # Stream responses in real time
    chat.ask "Could you give me a short example?" do |chunk|
    print chunk.content
    end
    # => "Sure!
    # ```ruby
    # class Person
    # attr...

    Simply amazing. Multi-turn conversations are handled automatically for you. Streaming is a breeze. Compare this to a similar implementation in LangChain:

    from langchain_openai import ChatOpenAI
    from langchain_core.schema import SystemMessage, HumanMessage, AIMessage
    from langchain_core.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

    SYSTEM_PROMPT = "You are a friendly Ruby expert who loves to help beginners."
    chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])

    history = [SystemMessage(content=SYSTEM_PROMPT)]

    def ask(user_text: str) -> None:
        """Stream the answer token-by-token and keep the context in memory."""
        history.append(HumanMessage(content=user_text))
        # .stream yields message chunks as they arrive
        for chunk in chat.stream(history):
            print(chunk.content, end="", flush=True)
        print()  # newline after the answer
        # the final chunk has the full message content
        history.append(AIMessage(content=chunk.content))

    ask("Hi! What does attr_reader do in Ruby?")
    ask("Great - could you show a short example with attr_accessor?")

    Yikes. And it’s important to note that this is a grug implementation. Want to know how LangChain really expects you to manage memory? Check out these links, but grab a bucket first; you may get sick.

    Neighbors — This is an excellent library to use for nearest-neighbors search in a Rails application. Very useful in a RAG setup. It integrates with Postgres, SQLite, MySQL, MariaDB, and more. It was written by Andrew Kane, the same guy who wrote the pgvector extension that allows Postgres to behave as a vector database.
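
    A sketch of what this looks like in a RAG flow, assuming a hypothetical Document model with an embedding vector column:

    # app/models/document.rb
    class Document < ApplicationRecord
      has_neighbors :embedding
    end

    # Retrieve the five stored chunks closest to the query embedding
    Document.nearest_neighbors(:embedding, query_embedding, distance: "cosine").first(5)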

    Async — This gem had its first official release back in December 2024, and it has been making waves in the Ruby community. Async is a fiber-based framework for Ruby that runs non-blocking I/O tasks concurrently while letting you write simple, sequential code. Fibers are like mini-threads that each have their own mini call stack. While not strictly a gem for AI, it has helped us create features like web scrapers that run blazingly fast across thousands of pages. We have also used it to handle streaming of chunks from LLMs.
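
    A minimal sketch of the style, assuming Ruby 3.x, where Async’s fiber scheduler lets ordinary blocking I/O yield to other fibers:

    require "async"
    require "net/http"

    urls = ["https://example.com/a", "https://example.com/b"]  # hypothetical pages

    Async do |task|
      pages = urls.map { |url|
        # Each request runs in its own fiber, so the slow I/O overlaps
        task.async { Net::HTTP.get(URI(url)) }
      }.map(&:wait)
      puts pages.map(&:length).inspect
    end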

    Torch.rb — If you are interested in training deep learning models, then surely you have heard of PyTorch. Well, PyTorch is built on LibTorch, which essentially has a lot of C/C++ code under the hood to perform ML operations quickly. Andrew Kane took LibTorch and made a Ruby adapter over it to create Torch.rb, essentially a Ruby version of PyTorch. Andrew Kane has been a hero in the Ruby AI world, authoring dozens of ML gems for Ruby.
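
    The API is meant to mirror PyTorch, so translated examples feel familiar. A small sketch of tensor basics, assuming the usual PyTorch-style surface carries over:

    require "torch"

    x = Torch.tensor([[1.0, 2.0], [3.0, 4.0]])
    w = Torch.rand(2, 2)

    y = x.matmul(w)  # matrix multiply, as in PyTorch
    puts y.shape.inspect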

    Summary

    In short: building a web application with AI integration quickly and cheaply requires a monolithic architecture. A monolith demands a monolingual application, which is necessary if your end goal is quality apps delivered with speed. Your main options are either Python or Ruby. If you go with Python, you will probably use Django for your web framework. If you go with Ruby, you will be using Ruby on Rails. At our agency, we found Django’s lack of features disappointing. Rails has impressed us with its feature set and emphasis on simplicity. We were thrilled to find almost no issues on the AI side.

    Of course, there are times where you will not want to use Ruby. If you are conducting research in AI or training machine learning models from scratch, then you will likely want to stick with Python. Research almost never involves building web applications. At most you’ll build a simple interface or dashboard in a notebook, but nothing production-ready. You’ll likely want the latest PyTorch updates to ensure your training runs quickly. You may even dive into low-level C/C++ programming to squeeze as much performance as you can out of your hardware. Maybe you’ll even try your hand at Mojo.

    But if your goal is to integrate the latest LLMs — either open or closed source — into web applications, then we believe Ruby to be the far superior option. Give it a shot yourselves!

    In part three of this series, I will dive into a fun experiment: just how simple can we make a web application with AI integration? Stay tuned.

     If you’d like a custom web application with generative AI integration, visit losangelesaiapps.com

    The post Building AI Applications in Ruby appeared first on Towards Data Science.
    #building #applications #ruby
    Building AI Applications in Ruby
    This is the second in a multi-part series on creating web applications with generative AI integration. Part 1 focused on explaining the AI stack and why the application layer is the best place in the stack to be. Check it out here. Table of Contents Introduction I thought spas were supposed to be relaxing? Microservices are for Macrocompanies Ruby and Python: Two Sides of the Same Coin Recent AI Based Gems Summary Introduction It’s not often that you hear the Ruby language mentioned when discussing AI. Python, of course, is the king in this world, and for good reason. The community has coalesced around the language. Most model training is done in PyTorch or TensorFlow these days. Scikit-learn and Keras are also very popular. RAG frameworks such as LangChain and LlamaIndex cater primarily to Python. However, when it comes to building web applications with AI integration, I believe Ruby is the better language. As the co-founder of an agency dedicated to building MVPs with generative AI integration, I frequently hear potential clients complaining about two things: Applications take too long to build Developers are quoting insane prices to build custom web apps These complaints have a common source: complexity. Modern web apps have a lot more complexity in them than in the good ol’ days. But why is this? Are the benefits brought by complexity worth the cost? I thought spas were supposed to be relaxing? One big piece of the puzzle is the recent rise of single-page applications. The most popular stack used today in building modern SPAs is MERN . The stack is popular for a few reasons: It is a JavaScript-only stack, across both front-end and back-end. Having to only code in only one language is pretty nice! SPAs can offer dynamic designs and a “smooth” user experience. Smooth here means that when some piece of data changes, only a part of the site is updated, as opposed to having to reload the whole page. Of course, if you don’t have a modern smartphone, SPAs won’t feel so smooth, as they tend to be pretty heavy. All that JavaScript starts to drag down the performance. There is a large ecosystem of libraries and developers with experience in this stack. This is pretty circular logic: is the stack popular because of the ecosystem, or is there an ecosystem because of the popularity? Either way, this point stands.React was created by Meta. Lots of money and effort has been thrown at the library, helping to polish and promote the product. Unfortunately, there are some downsides of working in the MERN stack, the most critical being the sheer complexity. Traditional web development was done using the Model-View-Controllerparadigm. In MVC, all of the logic managing a user’s session is handled in the backend, on the server. Something like fetching a user’s data was done via function calls and SQL statements in the backend. The backend then serves fully built HTML and CSS to the browser, which just has to display it. Hence the name “server”. In a SPA, this logic is handled on the user’s browser, in the frontend. SPAs have to handle UI state, application state, and sometimes even server state all in the browser. API calls have to be made to the backend to fetch user data. There is still quite a bit of logic on the backend, mainly exposing data and functionality through APIs. To illustrate the difference, let me use the analogy of a commercial kitchen. The customer will be the frontend and the kitchen will be the backend. MVCs vs. SPAs. Image generated by ChatGPT. 
Traditional MVC apps are like dining at a full-service restaurant. Yes, there is a lot of complexityin the backend. But the frontend experience is simple and satisfying: all the customer has to do is pick up a fork and eat their food. SPAs are like eating at a buffet-style dining restaurant. There is still quite a bit of complexity in the kitchen. But now the customer also has to decide what food to grab, how to combine them, how to arrange them on the plate, where to put the plate when finished, etc. Andrej Karpathy had a tweet recently discussing his frustration with attempting to build web apps in 2025. It can be overwhelming for those new to the space. The reality of building web apps in 2025 is that it's a bit like assembling IKEA furniture. There's no "full-stack" product with batteries included, you have to piece together and configure many individual services:– frontend / backend– hosting…— Andrej KarpathyMarch 27, 2025 In order to build MVPs with AI integration rapidly, our agency has decided to forgo the SPA and instead go with the traditional MVC approach. In particular, we have found Ruby on Railsto be the framework best suited to quickly developing and deploying quality apps with AI integration. Ruby on Rails was developed by David Heinemeier Hansson in 2004 and has long been known as a great web framework, but I would argue it has recently made leaps in its ability to incorporate AI into apps, as we will see. Django is the most popular Python web framework, and also has a more traditional pattern of development. Unfortunately, in our testing we found Django was simply not as full-featured or “batteries included” as Rails is. As a simple example, Django has no built-in background job system. Nearly all of our apps incorporate background jobs, so to not include this was disappointing. We also prefer how Rails emphasizes simplicity, with Rails 8 encouraging developers to easily self-host their apps instead of going through a provider like Heroku. They also recently released a stack of tools meant to replace external services like Redis. “But what about the smooth user experience?” you might ask. The truth is that modern Rails includes several ways of crafting SPA-like experiences without all of the heavy JavaScript. The primary tool is Hotwire, which bundles tools like Turbo and Stimulus. Turbo lets you dynamically change pieces of HTML on your webpage without writing custom JavaScript. For the times where you do need to include custom JavaScript, Stimulus is a minimal JavaScript framework that lets you do just that. Even if you want to use React, you can do so with the react-rails gem. So you can have your cake, and eat it too! SPAs are not the only reason for the increase in complexity, however. Another has to do with the advent of the microservices architecture. Microservices are for Macrocompanies Once again, we find ourselves comparing the simple past with the complexity of today. In the past, software was primarily developed as monoliths. A monolithic application means that all the different parts of your app — such as the user interface, business logic, and data handling — are developed, tested, and deployed as one single unit. The code is all typically housed in a single repo. Working with a monolith is simple and satisfying. Running a development setup for testing purposes is easy. You are working with a single database schema containing all of your tables, making queries and joins straightforward. 
Deployment is simple, since you just have one container to look at and modify. However, once your company scales to the size of a Google or Amazon, real problems begin to emerge. With hundreds or thousands of developers contributing simultaneously to a single codebase, coordinating changes and managing merge conflicts becomes increasingly difficult. Deployments also become more complex and risky, since even minor changes can blow up the entire application! To manage these issues, large companies began to coalesce around the microservices architecture. This is a style of programming where you design your codebase as a set of small, autonomous services. Each service owns its own codebase, data storage, and deployment pipelines. As a simple example, instead of stuffing all of your logic regarding an OpenAI client into your main app, you can move that logic into its own service. To call that service, you would then typically make REST calls, as opposed to function calls. This ups the complexity, but resolves the merge conflict and deployment issues, since each team in the organization gets to work on their own island of code. Another benefit to using microservices is that they allow for a polyglot tech stack. This means that each team can code up their service using whatever language they prefer. If one team prefers JavaScript while another likes Python, this is no issue. When we first began our agency, this idea of a polyglot stack pushed us to use a microservices architecture. Not because we had a large team, but because we each wanted to use the “best” language for each functionality. This meant: Using Ruby on Rails for web development. It’s been battle-tested in this area for decades. Using Python for the AI integration, perhaps deployed with something like FastAPI. Serious AI work requires Python, I was led to believe. Two different languages, each focused on its area of specialty. What could go wrong? Unfortunately, we found the process of development frustrating. Just setting up our dev environment was time-consuming. Having to wrangle Docker compose files and manage inter-service communication made us wish we could go back to the beauty and simplicity of the monolith. Having to make a REST call and set up the appropriate routing in FastAPI instead of making a simple function call sucked. “Surely we can’t develop AI apps in pure Ruby,” I thought. And then I gave it a try. And I’m glad I did. I found the process of developing an MVP with AI integration in Ruby very satisfying. We were able to sprint where before we were jogging. I loved the emphasis on beauty, simplicity, and developer happiness in the Ruby community. And I found the state of the AI ecosystem in Ruby to be surprisingly mature and getting better every day. If you are a Python programmer and are scared off by learning a new language like I was, let me comfort you by discussing the similarities between the Ruby and Python languages. Ruby and Python: Two Sides of the Same Coin I consider Python and Ruby to be like cousins. Both languages incorporate: High-level Interpretation: This means they abstract away a lot of the complexity of low-level programming details, such as memory management. Dynamic Typing: Neither language requires you to specify if a variable is an int, float, string, etc. The types are checked at runtime. Object-Oriented Programming: Both languages are object-oriented. Both support classes, inheritance, polymorphism, etc. 
Ruby is more “pure”, in the sense that literally everything is an object, whereas in Python a few thingsare not objects. Readable and Concise Syntax: Both are considered easy to learn. Either is great for a first-time learner. Wide Ecosystem of Packages: Packages to do all sorts of cool things are available in both languages. In Python they are called libraries, and in Ruby they are called gems. The primary difference between the two languages lies in their philosophy and design principles. Python’s core philosophy can be described as: There should be one — and preferably only one — obvious way to do something. In theory, this should emphasize simplicity, readability, and clarity. Ruby’s philosophy can be described as: There’s always more than one way to do something. Maximize developer happiness. This was a shock to me when I switched over from Python. Check out this simple example emphasizing this philosophical difference: # A fight over philosophy: iterating over an array # Pythonic way for i in range: print# Ruby way, option 1.each do |i| puts i end # Ruby way, option 2 for i in 1..5 puts i end # Ruby way, option 3 5.times do |i| puts i + 1 end # Ruby way, option 4.each { |i| puts i } Another difference between the two is syntax style. Python primarily uses indentation to denote code blocks, while Ruby uses do…end or {…} blocks. Most include indentation inside Ruby blocks, but this is entirely optional. Examples of these syntactic differences can be seen in the code shown above. There are a lot of other little differences to learn. For example, in Python string interpolation is done using f-strings: f"Hello, {name}!", while in Ruby they are done using hashtags: "Hello, #{name}!". Within a few months, I think any competent Python programmer can transfer their proficiency over to Ruby. Recent AI-based Gems Despite not being in the conversation when discussing AI, Ruby has had some recent advancements in the world of gems. I will highlight some of the most impressive recent releases that we have been using in our agency to build AI apps: RubyLLM — Any GitHub repo that gets more than 2k stars within a few weeks of release deserves a mention, and RubyLLM is definitely worthy. I have used many clunky implementations of LLM providers from libraries like LangChain and LlamaIndex, so using RubyLLM was like a breath of fresh air. As a simple example, let’s take a look at a tutorial demonstrating multi-turn conversations: require 'ruby_llm' # Create a model and give it instructions chat = RubyLLM.chat chat.with_instructions "You are a friendly Ruby expert who loves to help beginners." # Multi-turn conversation chat.ask "Hi! What does attr_reader do in Ruby?" # => "Ruby creates a getter method for each symbol... # Stream responses in real time chat.ask "Could you give me a short example?" do |chunk| print chunk.content end # => "Sure! # ```ruby # class Person # attr... Simply amazing. Multi-turn conversations are handled automatically for you. Streaming is a breeze. Compare this to a similar implementation in LangChain: from langchain_openai import ChatOpenAI from langchain_core.schema import SystemMessage, HumanMessage, AIMessage from langchain_core.callbacks.streaming_stdout import StreamingStdOutCallbackHandler SYSTEM_PROMPT = "You are a friendly Ruby expert who loves to help beginners." 
chat = ChatOpenAI]) history =def ask-> None: """Stream the answer token-by-token and keep the context in memory.""" history.append) # .stream yields message chunks as they arrive for chunk in chat.stream: printprint# newline after the answer # the final chunk has the full message content history.append) askaskYikes. And it’s important to note that this is a grug implementation. Want to know how LangChain really expects you to manage memory? Check out these links, but grab a bucket first; you may get sick. Neighbors — This is an excellent library to use for nearest-neighbors search in a Rails application. Very useful in a RAG setup. It integrates with Postgres, SQLite, MySQL, MariaDB, and more. It was written by Andrew Kane, the same guy who wrote the pgvector extension that allows Postgres to behave as a vector database. Async — This gem had its first official release back in December 2024, and it has been making waves in the Ruby community. Async is a fiber-based framework for Ruby that runs non-blocking I/O tasks concurrently while letting you write simple, sequential code. Fibers are like mini-threads that each have their own mini call stack. While not strictly a gem for AI, it has helped us create features like web scrapers that run blazingly fast across thousands of pages. We have also used it to handle streaming of chunks from LLMs. Torch.rb — If you are interested in training deep learning models, then surely you have heard of PyTorch. Well, PyTorch is built on LibTorch, which essentially has a lot of C/C++ code under the hood to perform ML operations quickly. Andrew Kane took LibTorch and made a Ruby adapter over it to create Torch.rb, essentially a Ruby version of PyTorch. Andrew Kane has been a hero in the Ruby AI world, authoring dozens of ML gems for Ruby. Summary In short: building a web application with AI integration quickly and cheaply requires a monolithic architecture. A monolith demands a monolingual application, which is necessary if your end goal is quality apps delivered with speed. Your main options are either Python or Ruby. If you go with Python, you will probably use Django for your web framework. If you go with Ruby, you will be using Ruby on Rails. At our agency, we found Django’s lack of features disappointing. Rails has impressed us with its feature set and emphasis on simplicity. We were thrilled to find almost no issues on the AI side. Of course, there are times where you will not want to use Ruby. If you are conducting research in AI or training machine learning models from scratch, then you will likely want to stick with Python. Research almost never involves building Web Applications. At most you’ll build a simple interface or dashboard in a notebook, but nothing production-ready. You’ll likely want the latest PyTorch updates to ensure your training runs quickly. You may even dive into low-level C/C++ programming to squeeze as much performance as you can out of your hardware. Maybe you’ll even try your hand at Mojo. But if your goal is to integrate the latest LLMs — either open or closed source — into web applications, then we believe Ruby to be the far superior option. Give it a shot yourselves! In part three of this series, I will dive into a fun experiment: just how simple can we make a web application with AI integration? Stay tuned.  If you’d like a custom web application with generative AI integration, visit losangelesaiapps.com The post Building AI Applications in Ruby appeared first on Towards Data Science. #building #applications #ruby
    TOWARDSDATASCIENCE.COM
    Building AI Applications in Ruby
    This is the second in a multi-part series on creating web applications with generative AI integration. Part 1 focused on explaining the AI stack and why the application layer is the best place in the stack to be. Check it out here. Table of Contents Introduction I thought spas were supposed to be relaxing? Microservices are for Macrocompanies Ruby and Python: Two Sides of the Same Coin Recent AI Based Gems Summary Introduction It’s not often that you hear the Ruby language mentioned when discussing AI. Python, of course, is the king in this world, and for good reason. The community has coalesced around the language. Most model training is done in PyTorch or TensorFlow these days. Scikit-learn and Keras are also very popular. RAG frameworks such as LangChain and LlamaIndex cater primarily to Python. However, when it comes to building web applications with AI integration, I believe Ruby is the better language. As the co-founder of an agency dedicated to building MVPs with generative AI integration, I frequently hear potential clients complaining about two things: Applications take too long to build Developers are quoting insane prices to build custom web apps These complaints have a common source: complexity. Modern web apps have a lot more complexity in them than in the good ol’ days. But why is this? Are the benefits brought by complexity worth the cost? I thought spas were supposed to be relaxing? One big piece of the puzzle is the recent rise of single-page applications (SPAs). The most popular stack used today in building modern SPAs is MERN (MongoDB, Express.js, React.js, Node.js). The stack is popular for a few reasons: It is a JavaScript-only stack, across both front-end and back-end. Having to only code in only one language is pretty nice! SPAs can offer dynamic designs and a “smooth” user experience. Smooth here means that when some piece of data changes, only a part of the site is updated, as opposed to having to reload the whole page. Of course, if you don’t have a modern smartphone, SPAs won’t feel so smooth, as they tend to be pretty heavy. All that JavaScript starts to drag down the performance. There is a large ecosystem of libraries and developers with experience in this stack. This is pretty circular logic: is the stack popular because of the ecosystem, or is there an ecosystem because of the popularity? Either way, this point stands.React was created by Meta. Lots of money and effort has been thrown at the library, helping to polish and promote the product. Unfortunately, there are some downsides of working in the MERN stack, the most critical being the sheer complexity. Traditional web development was done using the Model-View-Controller (MVC) paradigm. In MVC, all of the logic managing a user’s session is handled in the backend, on the server. Something like fetching a user’s data was done via function calls and SQL statements in the backend. The backend then serves fully built HTML and CSS to the browser, which just has to display it. Hence the name “server”. In a SPA, this logic is handled on the user’s browser, in the frontend. SPAs have to handle UI state, application state, and sometimes even server state all in the browser. API calls have to be made to the backend to fetch user data. There is still quite a bit of logic on the backend, mainly exposing data and functionality through APIs. To illustrate the difference, let me use the analogy of a commercial kitchen. The customer will be the frontend and the kitchen will be the backend. MVCs vs. SPAs. 
Image generated by ChatGPT. Traditional MVC apps are like dining at a full-service restaurant. Yes, there is a lot of complexity (and yelling, if The Bear is to be believed) in the backend. But the frontend experience is simple and satisfying: all the customer has to do is pick up a fork and eat their food. SPAs are like eating at a buffet-style dining restaurant. There is still quite a bit of complexity in the kitchen. But now the customer also has to decide what food to grab, how to combine them, how to arrange them on the plate, where to put the plate when finished, etc. Andrej Karpathy had a tweet recently discussing his frustration with attempting to build web apps in 2025. It can be overwhelming for those new to the space. The reality of building web apps in 2025 is that it's a bit like assembling IKEA furniture. There's no "full-stack" product with batteries included, you have to piece together and configure many individual services:– frontend / backend (e.g. React, Next.js, APIs)– hosting…— Andrej Karpathy (@karpathy) March 27, 2025 In order to build MVPs with AI integration rapidly, our agency has decided to forgo the SPA and instead go with the traditional MVC approach. In particular, we have found Ruby on Rails (often denoted as Rails) to be the framework best suited to quickly developing and deploying quality apps with AI integration. Ruby on Rails was developed by David Heinemeier Hansson in 2004 and has long been known as a great web framework, but I would argue it has recently made leaps in its ability to incorporate AI into apps, as we will see. Django is the most popular Python web framework, and also has a more traditional pattern of development. Unfortunately, in our testing we found Django was simply not as full-featured or “batteries included” as Rails is. As a simple example, Django has no built-in background job system. Nearly all of our apps incorporate background jobs, so to not include this was disappointing. We also prefer how Rails emphasizes simplicity, with Rails 8 encouraging developers to easily self-host their apps instead of going through a provider like Heroku. They also recently released a stack of tools meant to replace external services like Redis. “But what about the smooth user experience?” you might ask. The truth is that modern Rails includes several ways of crafting SPA-like experiences without all of the heavy JavaScript. The primary tool is Hotwire, which bundles tools like Turbo and Stimulus. Turbo lets you dynamically change pieces of HTML on your webpage without writing custom JavaScript. For the times where you do need to include custom JavaScript, Stimulus is a minimal JavaScript framework that lets you do just that. Even if you want to use React, you can do so with the react-rails gem. So you can have your cake, and eat it too! SPAs are not the only reason for the increase in complexity, however. Another has to do with the advent of the microservices architecture. Microservices are for Macrocompanies Once again, we find ourselves comparing the simple past with the complexity of today. In the past, software was primarily developed as monoliths. A monolithic application means that all the different parts of your app — such as the user interface, business logic, and data handling — are developed, tested, and deployed as one single unit. The code is all typically housed in a single repo. Working with a monolith is simple and satisfying. Running a development setup for testing purposes is easy. 
You are working with a single database schema containing all of your tables, making queries and joins straightforward. Deployment is simple, since you just have one container to look at and modify. However, once your company scales to the size of a Google or Amazon, real problems begin to emerge. With hundreds or thousands of developers contributing simultaneously to a single codebase, coordinating changes and managing merge conflicts becomes increasingly difficult. Deployments also become more complex and risky, since even minor changes can blow up the entire application! To manage these issues, large companies began to coalesce around the microservices architecture. This is a style of programming where you design your codebase as a set of small, autonomous services. Each service owns its own codebase, data storage, and deployment pipelines. As a simple example, instead of stuffing all of your logic regarding an OpenAI client into your main app, you can move that logic into its own service. To call that service, you would then typically make REST calls, as opposed to function calls. This ups the complexity, but resolves the merge conflict and deployment issues, since each team in the organization gets to work on their own island of code. Another benefit to using microservices is that they allow for a polyglot tech stack. This means that each team can code up their service using whatever language they prefer. If one team prefers JavaScript while another likes Python, this is no issue. When we first began our agency, this idea of a polyglot stack pushed us to use a microservices architecture. Not because we had a large team, but because we each wanted to use the “best” language for each functionality. This meant: Using Ruby on Rails for web development. It’s been battle-tested in this area for decades. Using Python for the AI integration, perhaps deployed with something like FastAPI. Serious AI work requires Python, I was led to believe. Two different languages, each focused on its area of specialty. What could go wrong? Unfortunately, we found the process of development frustrating. Just setting up our dev environment was time-consuming. Having to wrangle Docker compose files and manage inter-service communication made us wish we could go back to the beauty and simplicity of the monolith. Having to make a REST call and set up the appropriate routing in FastAPI instead of making a simple function call sucked. “Surely we can’t develop AI apps in pure Ruby,” I thought. And then I gave it a try. And I’m glad I did. I found the process of developing an MVP with AI integration in Ruby very satisfying. We were able to sprint where before we were jogging. I loved the emphasis on beauty, simplicity, and developer happiness in the Ruby community. And I found the state of the AI ecosystem in Ruby to be surprisingly mature and getting better every day. If you are a Python programmer and are scared off by learning a new language like I was, let me comfort you by discussing the similarities between the Ruby and Python languages. Ruby and Python: Two Sides of the Same Coin I consider Python and Ruby to be like cousins. Both languages incorporate: High-level Interpretation: This means they abstract away a lot of the complexity of low-level programming details, such as memory management. Dynamic Typing: Neither language requires you to specify if a variable is an int, float, string, etc. The types are checked at runtime. Object-Oriented Programming: Both languages are object-oriented. 
Another benefit of microservices is that they allow for a polyglot tech stack, meaning each team can write its service in whatever language it prefers. If one team prefers JavaScript while another likes Python, this is no issue.

When we first began our agency, this idea of a polyglot stack pushed us toward a microservices architecture. Not because we had a large team, but because we each wanted to use the "best" language for each piece of functionality. This meant:

- Ruby on Rails for web development, since it has been battle-tested in this area for decades.
- Python for the AI integration, perhaps deployed with something like FastAPI. Serious AI work requires Python, I was led to believe.

Two different languages, each focused on its area of specialty. What could go wrong?

Unfortunately, we found the process of development frustrating. Just setting up our dev environment was time-consuming. Wrangling Docker Compose files and managing inter-service communication made us wish we could go back to the beauty and simplicity of the monolith. Making a REST call and setting up the corresponding routing in FastAPI, instead of making a simple function call, was painful.

"Surely we can't develop AI apps in pure Ruby," I thought. And then I gave it a try. And I'm glad I did.

I found the process of developing an MVP with AI integration in Ruby very satisfying. We were able to sprint where before we were jogging. I loved the Ruby community's emphasis on beauty, simplicity, and developer happiness. And I found the state of the AI ecosystem in Ruby to be surprisingly mature and getting better every day.

If you are a Python programmer and are scared off by learning a new language, as I was, let me comfort you by walking through the similarities between the two languages.

Ruby and Python: Two Sides of the Same Coin

I consider Python and Ruby to be like cousins. Both languages share:

- High-level interpretation: Both abstract away much of the complexity of low-level programming, such as memory management.
- Dynamic typing: Neither language requires you to declare whether a variable is an int, float, string, etc. Types are checked at runtime.
- Object-oriented programming: Both support classes, inheritance, polymorphism, and so on. Ruby is more "pure" in the sense that literally everything is an object, whereas in Python a few things (such as if and for statements) are not objects.
- Readable, concise syntax: Both are considered easy to learn, and either is great for a first-time learner.
- A wide ecosystem of packages: Packages for all sorts of tasks are available in both languages. In Python they are called libraries; in Ruby they are called gems.

The primary difference between the two languages lies in their philosophy and design principles. Python's core philosophy can be summarized as:

There should be one, and preferably only one, obvious way to do something.

In theory, this emphasizes simplicity, readability, and clarity. Ruby's philosophy can be summarized as:

There's always more than one way to do something. Maximize developer happiness.

This was a shock to me when I switched over from Python. Check out this simple example highlighting the philosophical difference:

# A fight over philosophy: iterating over an array

# Pythonic way
for i in range(1, 6):
    print(i)

# Ruby way, option 1
(1..5).each do |i|
  puts i
end

# Ruby way, option 2
for i in 1..5
  puts i
end

# Ruby way, option 3
5.times do |i|
  puts i + 1
end

# Ruby way, option 4
(1..5).each { |i| puts i }

Another difference is syntax style. Python primarily uses indentation to denote code blocks, while Ruby uses do…end or {…} blocks. Most Rubyists still indent inside their blocks, but this is entirely optional. Examples of these syntactic differences can be seen in the code above.

There are plenty of other small differences to learn. For example, in Python string interpolation is done with f-strings, as in f"Hello, {name}!", while in Ruby it is done with the #{} syntax, as in "Hello, #{name}!". Within a few months, I think any competent Python programmer can transfer their proficiency to Ruby.

Recent AI-based Gems

Despite rarely being part of the AI conversation, Ruby has seen some impressive recent advancements in the world of gems. I will highlight some of the releases we have been using at our agency to build AI apps.

RubyLLM (link). Any GitHub repo that gains more than 2k stars within a few weeks of release deserves a mention, and RubyLLM is definitely worthy. I have used many clunky implementations of LLM providers from libraries like LangChain and LlamaIndex, so using RubyLLM was like a breath of fresh air. As a simple example, here is a snippet from its tutorial demonstrating a multi-turn conversation:

require 'ruby_llm'

# Create a model and give it instructions
chat = RubyLLM.chat
chat.with_instructions "You are a friendly Ruby expert who loves to help beginners."

# Multi-turn conversation
chat.ask "Hi! What does attr_reader do in Ruby?"
# => "Ruby creates a getter method for each symbol...

# Stream responses in real time
chat.ask "Could you give me a short example?" do |chunk|
  print chunk.content
end
# => "Sure!
# ```ruby
# class Person
#   attr...

Simply amazing. Multi-turn conversation is handled automatically for you, and streaming is a breeze. Compare this to a similar implementation in LangChain:
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langchain_core.callbacks import StreamingStdOutCallbackHandler

SYSTEM_PROMPT = "You are a friendly Ruby expert who loves to help beginners."

chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
history = [SystemMessage(content=SYSTEM_PROMPT)]

def ask(user_text: str) -> None:
    """Stream the answer token-by-token and keep the context in memory."""
    history.append(HumanMessage(content=user_text))
    full_response = ""
    # .stream yields message chunks (deltas) as they arrive,
    # so we accumulate them to reconstruct the full answer
    for chunk in chat.stream(history):
        print(chunk.content, end="", flush=True)
        full_response += chunk.content
    print()  # newline after the answer
    history.append(AIMessage(content=full_response))

ask("Hi! What does attr_reader do in Ruby?")
ask("Great - could you show a short example with attr_accessor?")

Yikes. And it's important to note that this is a grug implementation. Want to know how LangChain really expects you to manage memory? Check out these links, but grab a bucket first; you may get sick.

Neighbors (link). This is an excellent library for nearest-neighbor search in a Rails application, which makes it very useful in a RAG setup. It integrates with Postgres, SQLite, MySQL, MariaDB, and more. It was written by Andrew Kane, the same developer behind the pgvector extension that lets Postgres behave as a vector database.
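Here is a minimal sketch of what a RAG-style lookup with Neighbors can look like. The Document model, its embedding column, and the embed helper are hypothetical, invented for illustration; has_neighbors and nearest_neighbors are the gem's core API:

# app/models/document.rb (hypothetical model with a vector column)
class Document < ApplicationRecord
  has_neighbors :embedding
end

# Embed the user's query (via RubyLLM, an OpenAI client, etc.),
# then fetch the five most similar documents by cosine distance.
query_embedding = embed("How do I reset my password?") # hypothetical helper
context_docs = Document
  .nearest_neighbors(:embedding, query_embedding, distance: "cosine")
  .first(5)

The retrieved documents can then be folded into the prompt of a chat call like the RubyLLM example above.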
Async (link). This gem had its first official release back in December 2024, and it has been making waves in the Ruby community. Async is a fiber-based framework for Ruby that runs non-blocking I/O tasks concurrently while letting you write simple, sequential code. (Fibers are like mini-threads, each with its own small call stack.) While not strictly an AI gem, Async has helped us build features like web scrapers that run blazingly fast across thousands of pages. We have also used it to handle streaming of chunks from LLMs.
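Here is a minimal sketch of concurrent fetches with Async, assuming Ruby 3.x, where Async hooks into the fiber scheduler so that even standard Net::HTTP calls become non-blocking inside a task. The URLs are placeholders:

require "async"
require "net/http"
require "uri"

urls = [
  "https://example.com/a",
  "https://example.com/b",
  "https://example.com/c",
]

Async do
  urls.each do |url|
    # Each child task yields whenever it blocks on I/O,
    # so all requests proceed concurrently on a single thread.
    Async do
      body = Net::HTTP.get(URI(url))
      puts "#{url}: #{body.bytesize} bytes"
    end
  end
end

The code reads top to bottom like sequential Ruby; the concurrency comes entirely from the fibers that Async manages for you.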
Torch.rb (link). If you are interested in training deep learning models, then surely you have heard of PyTorch, which is built on LibTorch, a large C/C++ core that performs ML operations quickly. Andrew Kane took LibTorch and built a Ruby adapter over it to create Torch.rb, essentially a Ruby version of PyTorch. Andrew Kane has been a hero in the Ruby AI world, authoring dozens of ML gems for Ruby.

Summary

In short: building a web application with AI integration quickly and cheaply calls for a monolithic architecture, and a monolith is best served by a single language. If your end goal is quality apps delivered with speed, your main options are Python or Ruby. If you go with Python, you will probably use Django as your web framework; if you go with Ruby, you will be using Ruby on Rails. At our agency, we found Django's comparatively thin feature set disappointing, while Rails impressed us with its features and its emphasis on simplicity. And we were thrilled to find almost no friction on the AI side.

Of course, there are times when you will not want to use Ruby. If you are conducting AI research or training machine learning models from scratch, you will likely want to stick with Python. Research almost never involves building web applications; at most you will build a simple interface or dashboard in a notebook, nothing production-ready. You will want the latest PyTorch updates to ensure your training runs quickly, and you may even dive into low-level C/C++ programming to squeeze as much performance as you can out of your hardware. Maybe you will even try your hand at Mojo.

But if your goal is to integrate the latest LLMs, open or closed source, into web applications, then we believe Ruby to be the far superior option. Give it a shot yourselves!

In part three of this series, I will dive into a fun experiment: just how simple can we make a web application with AI integration? Stay tuned.

If you'd like a custom web application with generative AI integration, visit losangelesaiapps.com