Collaborators: Healthcare Innovation to Impact

    JONATHAN CARLSON: From the beginning, healthcare stood out to us as an important opportunity for general reasoners to improve the lives and experiences of patients and providers. Indeed, in the past two years, there’s been an explosion of scientific papers looking at the application first of text reasoners in medicine, then multi-modal reasoners that can interpret medical images, and now, most recently, healthcare agents that can reason with each other. But even more impressive than the pace of research has been the surprisingly rapid diffusion of this technology into real-world clinical workflows.
    LUNGREN: So today, we’ll talk about how our cross-company collaboration has shortened that gap and delivered advanced AI capabilities and solutions into the hands of developers and clinicians around the world, empowering everyone in health and life sciences to achieve more. I’m Doctor Matt Lungren, chief scientific officer for Microsoft Health and Life Sciences. 
    CARLSON: And I’m Jonathan Carlson, vice president and managing director of Microsoft Health Futures. 
    LUNGREN: And together we brought some key players leading in the space of AI and healthcare.
    CARLSON: We’ve asked these brilliant folks to join us because each of them represents a mission-critical group of cutting-edge stakeholders, scaling breakthroughs into purpose-built solutions and capabilities for healthcare.
    LUNGREN: We’ll hear today how generative AI capabilities can unlock reasoning across every data type in medicine: text, images, waveforms, genomics. And further, how multi-agent frameworks in healthcare can accelerate complex workflows, in some cases acting as a specialist team member, safely secured inside the Microsoft 365 tools used by hundreds of millions of healthcare enterprise users across the world. The opportunity to save time today and lives tomorrow with AI has never been larger.
    MATTHEW LUNGREN: Jonathan. You know, it’s been really interesting kind of observing Microsoft Research over the decades. I’ve, you know, been watching you guys in my prior academic career. You are always on the front of innovation, particularly in healthcare.
     JONATHAN CARLSON: I mean, it’s some of what’s in our DNA, I mean, we’ve been publishing in health and life sciences for two decades here. But when we launched Health Futures as a mission-focused lab about 7 or 8 years ago, we really started with the premise that the way to have impact was to really close the loop between, not just good ideas that get published, but good ideas that can actually be grounded in real problems that clinicians and scientists care about, that then allow us to actually go from that first proof of concept into an incubation, into getting real world feedback that allows us to close that loop. And now with, you know, the HLS organization here as a product group, we have the opportunity to work really closely with you all to not just prove what’s possible in the clinic or in the lab, but actually start scaling that into the broader community. 
    CAMERON RUNDE: And one thing I’ll add here is that the problems that we’re trying to tackle in healthcare …
    CARLSON: So, Matt, back to you. What are you guys doing in the product group? How do you guys see these models getting into the clinic?
    LUNGREN: You know, I think a lot of people, you know, think about AI as just, you know, maybe just even a few years old because of GPT and how that really captured the public’s consciousness. Right?
    And so, you think about the speech-to-text technology of being able to dictate something, for a clinic note or for a visit, that was typically based on Nuance technology. And so there’s a lot of product understanding of the market, how to deliver something that clinicians will use, understanding the pain points and workflows and really that Health IT space, which is sometimes the third rail, I feel like, with a lot of innovation in healthcare. 
    But beyond that, I mean, I think now that we have this really powerful engine of Microsoft and the platform capabilities, we’re seeing innovations on the healthcare side for data storage and data interoperability with different types of medical data. You have new applications coming online, the ability, of course, to see generative AI now infused into the speech-to-text and becoming Dragon Copilot, which is something that has been, you know, tremendously well received by the community. 
    Physicians are able to now just have a conversation with a patient. They turn to their computer and the note is ready for them. There’s no more of this; we call it keyboard liberation. I don’t know if you’ve heard that before. And that’s just been tremendous. And there’s so much more coming from that side. And then there’s other parts of the workflow that we also get engaged in: the diagnostic workflow.
    So medical imaging, sharing images across different hospital systems, the list goes on. And so now when you move into AI, we feel like there’s a huge opportunity to deliver capabilities into the clinical workflow via the products and solutions we already have. But, I mean, now that we’ve kind of expanded our team to involve Azure and the platform, we’re really able to focus on the developers.
    WILL GUYMAN: Yeah. And you’re always telling me as a doctor how frustrating it is to be spending time at the computer instead of with your patients. I think you told me, you know, 4,000 clicks a day for the typical doctor, which is tremendous. And something like Dragon Copilot can save five minutes per patient. But it can also now take actions after the patient encounter, so it can draft the after-visit summary. 
    It can order labs and medications for the referral. And that’s incredible. And we want to keep building on that. There’s so many other use cases across the ecosystem. And so that’s why in Azure AI Foundry, we have translated a lot of the research from Microsoft Research and made that available to developers to build and customize for their own applications. 
    SMITHA SALIGRAMA: Yeah. And as you were saying, in our transformation of moving from solutions to platforms, and scaling solutions to multiple scenarios, as we put our models in AI Foundry, we provide these developer capabilities like bring your own data and fine-tuning.
    LUNGREN: Well, I want to do a reality check because, you know, I think to those of us that are now really focused on technology, it seems like I’ve heard this story before, right. I remember even in my academic clinical days it felt like technology was always the quick answer, and there was maybe a disconnect between what my problems were, or what I thought needed to be done, versus the solutions that were created or offered to us. And I guess at some level, Jonathan, how do you think about this? Because to do things well in science is one thing, but to also have it be something that actually drives healthcare forward is another.
    CARLSON: Yeah. I mean, as you said, I think one of the core pathologies of Big Tech is we assume every problem is a technology problem, and that technology is all it will take to solve it. And I think, look, I was trained as a computational biologist, and that sits in the awkward middle between biology and computation. And the thing that we always have to remember, the thing that we were very acutely aware of when we set out, was that we are not the experts. We do have, you know, you as an M.D., we have everybody on the team, we have biologists on the team. 
    But this is a big space. And the only way we’re going to have real impact, the only way we’re even going to pick the right problems to work on, is if we really partner deeply with providers, with EHR vendors, with scientists, and really understand what’s important and, again, get that feedback loop. 
    RUNDE: Yeah, I think we really need to ground the work that we do in the science itself. You need to understand the broader ecosystem and the broader landscape across healthcare, and the problems we think are important. Because, as Jonathan said, we’re not the experts in healthcare.
    CARLSON: When we really launched this, this mission, 7 or 8 years ago, we really came in with the premise of, if we decide to stop, we want to be sure the world cares. And the only way that’s going to be true is if we’re really deeply embedded with the people that matter: the patients, the providers, and the scientists.
    LUNGREN: And now it really feels like this collaborative effort, you know, really can help start to extend that mission. Right. I think, you know, Will and Smitha, that we definitely feel the passion and the innovation. And we certainly benefit from those collaborations, too. But then we have these other partners and even customers, right, that we can start to tap into and have that flywheel keep spinning. 
    GUYMAN: Yeah. And the whole industry is an ecosystem. So, we have our own data sets at Microsoft Research that you’ve trained amazing AI models with. And those are in the catalog. But then you’ve also partnered with institutions like Providence or Paige. And those models are in the catalog with their data. And then there are third parties like Nvidia that have their own specialized proprietary data sets, and their models are there too. So, we have this ecosystem of open-source models. And maybe, Smitha, you want to talk about how developers can actually customize these. 
    SALIGRAMA: Yeah. So we use the Azure AI Foundry ecosystem. Developers can feel at home if they’re using the AI Foundry. So they can look at the model cards we publish with the models, understand the use cases of these models, how to quickly bring up these APIs, look at different use cases of how to apply these, and even fine-tune them for their own scenarios.
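As a rough illustration of what "bringing up these APIs" can look like for a developer, here is a minimal sketch of calling a model deployed from the Foundry catalog as an online endpoint. The endpoint URL, key, and request schema below are placeholders, not the actual contract for any specific model; the real payload format is documented in each model card.

```python
# Minimal sketch: calling a healthcare model deployed as an online endpoint.
# The URL, key, and payload schema are placeholders for illustration only.
import base64
import requests

ENDPOINT_URL = "https://<your-endpoint>.inference.ml.azure.com/score"  # placeholder
API_KEY = "<your-endpoint-key>"  # placeholder; keep real keys out of source code

def classify_chest_xray(image_path: str) -> dict:
    """Send one image to the deployed model and return its raw JSON response."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {"input_data": {"image": image_b64}}  # hypothetical schema
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    response = requests.post(ENDPOINT_URL, json=payload, headers=headers, timeout=60)
    response.raise_for_status()
    return response.json()

# Example usage (requires a real endpoint and image):
# print(classify_chest_xray("example_cxr.png"))
```

The same calling pattern applies whether the model is used as published or after fine-tuning on your own data.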
    LUNGREN: Yeah, it has been interesting to see. We have these healthcare-specific models in the catalog now, but why do we need them when the general-purpose models are already so capable?
    GUYMAN: Well, the general-purpose large language models are amazing for medical general reasoning. So Microsoft Research has shown that they can perform super well on, for example, the United States medical licensing exam; they can exceed doctor performance if they’re just picking between different multiple-choice questions. But real medicine, we know, is messier. It doesn’t always start with the whole patient context provided as text in the prompt. You have to get the source data, and that raw data is often non-text. The majority of it is non-text. It’s things like medical imaging, radiology, pathology, ophthalmology, dermatology. It goes on and on. And there’s endless signal data, lab data. And so all of these diverse data types need to be processed through specialized models, because much of that data is not available on the public internet. 
    And that’s why we’re taking this partner approach, first-party and third-party models that can interpret all this kind of data and then connect them ultimately back to these general reasoners to reason over that. 
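A rough sketch of that pattern, with both model calls stubbed out (no specific model, endpoint, or API is implied): a specialized imaging model produces structured findings, and those findings are handed to a general reasoner as text.

```python
# Sketch: specialized model interprets non-text data; its structured output
# becomes text context for a general reasoner. Both calls are placeholders.
def run_imaging_model(dicom_path: str) -> dict:
    # Placeholder for a specialized (e.g., radiology) model endpoint call.
    return {"finding": "right lower lobe opacity", "confidence": 0.91}

def run_general_reasoner(prompt: str) -> str:
    # Placeholder for a general-purpose LLM call.
    return f"(model response to: {prompt!r})"

findings = run_imaging_model("study_001.dcm")
prompt = (
    "A chest X-ray model reports: "
    f"{findings['finding']} (confidence {findings['confidence']:.2f}). "
    "Given this finding and a 3-day history of fever and productive cough, "
    "what follow-up would you consider?"
)
print(run_general_reasoner(prompt))
```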
    LUNGREN: So, you know, I’ve been at this company for a while and, you know, familiar with kind of how long it takes, generally to get, you know, a really good research paper, do all the studies, do all the data analysis, and then go through the process of publishing, right, which takes, as, you know, a long time and it’s, you know, very rigorous. 
    And one of the things that struck me, last year, I think we, we started this big collaboration and, within a quarter, you had a Nature paper coming out from Microsoft Research, and that model that the Nature paper was describing was ready to be used by anyone on the Azure AI Foundry within that same quarter. It kind of blew my mind when I thought about it, you know, even though we were all, you know, working very hard to get that done. Any thoughts on that? I mean, has this ever happened in your career? And, you know, what’s the secret sauce to that? 
    CARLSON: Yeah, I mean, the time scale from research to product has been massively compressed. And I’d push that even further, which is to say, the reason why it took a quarter was because we were laying the railroad tracks as we were driving the train. We have examples right after that where we were launching on Foundry the same day we were publishing the paper. 
    And frankly, the review times are becoming longer than it takes to actually productize the models. I think there are two things going on that are really converging. One is that the overall ecosystem is converging on a relatively small number of patterns, and that gives us, as a tech company, a reason to go off and really harden those patterns in a way that allows not just us, but third parties as well, to have a nice workflow to publish these models. 
    But the other is actually, I think, a change in how we work, you know, and for most of our history as an industrial research lab, we would do research and then we’d go pitch it to somebody and try and throw it over the fence. We’ve really built a much more integrated team. In fact, if you look at that Nature paper or any of the other papers, there’s folks from product teams. Many of you are on the papers along with our clinical collaborators.
    RUNDE: Yeah. I think one thing that’s really important to note is that there’s a ton of different ways that you can have impact, right? So I like to think about phasing. In Health Futures at least, I like to think about phasing the work that we do. So first we have research, which is really early innovation. And the impact there is getting our technology and our tools out there and really sharing the learnings that we’ve had. 
    So that can be through publications like you mentioned. It can be through open-sourcing our models. And then you go to incubation. So, this is, I think, one of the newer spaces that we’re getting into, which is maybe that blurred line between research and product. Right. Which is, how do we take the tools and technologies that we’ve built and get them into the hands of users, typically through our partnerships? 
    Right. So, we partner very deeply and collaborate very deeply across the industry. And incubation is really important because we get that early feedback. We get an ability to pivot if we need to. And we also get the ability to see what types of impact our technology is having in the real world. And then lastly, when you think about scale, there’s tons of different ways that you can scale. We can scale third-party through our collaborators and really empower them to go to market to commercialize the things that we’ve built together. 
    You can also think about scaling internally, which is why I’m so thankful that we’ve created this flywheel between research and product, and a lot of the models that we’ve built that have gone through research, have gone through incubation, have been able to scale on the Azure AI Foundry. But that scale piece is not really our expertise in research, right? Ours is the research and the incubation. Smitha, how do you think about scaling? 
    SALIGRAMA: So, there are several angles to scaling the models, the state-of-the-art models we see from the research team. The first angle is the open sourcing, to get developer trust, and very generous commercial licenses so that they can use them for their own use cases. The second is, we also allow them to customize these models, fine-tuning them on their own data.
    GUYMAN: And as one example, you know, University of Wisconsin Health, you know, which Matt knows well. They took one of our models, which is highly versatile. They customized it in Foundry and they optimized it to reliably identify abnormal chest X-rays, the most common imaging procedure, so they could improve their turnaround time and triage quickly. And that’s just one example. But we have other partners like Sectra who are doing more of the operations use cases, automatically routing imaging to the radiologists, setting them up to be efficient. And then Paige is doing, you know, biomarker identification for diagnostics and new drug discovery. So, there are so many use cases that we have partners already who are building and customizing.
    LUNGREN: The part that’s striking to me is just that, you know, we could all sit in a room and think about all the different ways someone might use these models on the catalog. And I’m still shocked at the stuff that people use them for and how effective they are. And I think part of that is, you know, again, we talk a lot about generative AI and healthcare and all the things you can do. Again, you know, in text, you referred to that earlier, and certainly off the shelf there are really powerful applications. But there is, you know, kind of this tip-of-the-iceberg effect where, under the water, most of the data that we use to take care of our patients is not text. Right. It’s all the different other modalities. And I think that this has been an unlock, right, sort of taking these innovations from the community, putting them in this ecosystem kind of catalog, essentially. Right. And then allowing folks to kind of, you know, build and develop applications with all these different types of data. Again, I’ve been surprised at what I’m seeing. 
    CARLSON: This has been just one of the most profound shifts that’s happened in the last 12 months, really. I mean, two years ago we had general models in text that really shifted how we think about, I mean, natural language processing got totally upended by that. Turns out the same technology works for images as well. It doesn’t only allow you to automatically extract concepts from images, but allows you to align those image concepts with text concepts, which means that you can have a conversation with that image. And once you’re in that world now, you are in a place where you can start stitching together these multimodal models that really change how you can interact with the data, and how you can start getting more information out of the raw primary data that is part of the patient journey.
    LUNGREN: Well, and we’re going to get to that because I think you just touched on something. And I want to re-emphasize stitching these things together. There’s a lot of different ways to potentially do that. Right? There’s ways that you can literally train the model end to end with adapters and all kinds of other early fusion approaches. All kinds of ways. But one of the things, the word of the year, I guess, is going to be agents. And an agent is a very interesting term to think about how you might abstract away some of the components or the tasks that you want the model to accomplish in the midst of sort of a real human-to-model interaction. Can you talk a little bit more about how we’re thinking about agents in this platform approach?
    GUYMAN: Well, this is our newest addition to the Azure AI Foundry. So there’s an agent catalog now where we have a set of pre-configured agents for healthcare. And then we also have a multi-agent orchestrator that can jump-start these workflows.
    LUNGREN: And I really like that concept because, you know, from the user personas, I think about myself as a user. How am I going to interact with these agents? Where does it naturally fit? And I sort of, you know, I’ve seen some of the demonstrations and some of the work that’s going on with Stanford in particular, showing that, you know, literally in a Teams chat, I can have my clinician colleagues and I can have specialized healthcare agents all in the same conversation.
    It is a completely mind-blowing thing for me. And it’s a light bulb moment for me, too. I wonder, what have we heard from folks that have, you know, tried out this healthcare agent orchestrator in this kind of deployment environment via Teams?
    GUYMAN: Well, someone joked, you know, are you sure you’re not using Teams because you work at Microsoft? But then we actually were meeting with one of the radiologists at one of our partners, and they said that that morning they had just done a Teams meeting where they had met with other specialists to talk about a patient’s cancer case and come up with a treatment plan. 
    And that was the light bulb moment for us. We realized, actually, Teams is already being used by physicians as an internal communication tool, as a tool to get work done. And especially since the pandemic, a lot of the meetings moved to virtual and telemedicine. And so it’s a great distribution channel for AI, which has often struggled to actually get into the hands of clinicians. And so now we’re allowing developers to build and then deploy very easily and extend it into their own workflows. 
    CARLSON: I think that’s such an important point. I mean, if you think about it, one of the really important concepts in computer science is an application programming interface, like some set of rules that allow two applications to talk to each other. One of the big pushes, really important pushes, in medicine has been standards, data standards and APIs that allow these to talk to each other, and yet still we end up with these silos. There’s silos of data. There’s silos of applications.
    And just like when you and I work on our phone, we have to go back and forth between applications. One of the things that I think agents do is that it takes the idea that now you can use language to understand intent and effectively program an interface, and it creates a whole new abstraction layer that allows us to simplify the interaction between not just humans and the endpoint, but also for developers. 
    It allows us to have this abstraction layer that lets different developers focus on different types of models, and yet stitch them all together in a very, very natural, way, not just for the users, but for the ability to actually deploy those models. 
    SALIGRAMA: Just to add to what Jonathan was mentioning, the other cool thing about the Microsoft Teams user interface is it’s also enterprise ready.
    RUNDE: And one important thing that we’re thinking about, is exactly this from the very early research through incubation and then to scale, obviously. Right. And so early on in research, we are actively working with our partners and our collaborators to make sure that we have the right data privacy and consent in place. We’re doing this in incubation as well. And then obviously in scale. Yep. 
    LUNGREN: So, I think AI has always been thought of as a savior kind of technology. We talked a little bit about how there’s been some ups and downs in terms of the ability for technology to be effective in health care. At the same time, we’re seeing a lot of new innovations that are really making a difference. But then we kind of get, you know, we talked about agents a little bit. It feels like we’re maybe abstracting too far. Maybe it’s things are going too fast, almost. What makes this different? I mean, in your mind is this truly a logical next step or is it going to take some time? 
    CARLSON: I think there’s a couple things that have happened. I think first, on just a pure technology. What led to ChatGPT? And I like to think of really three major breakthroughs.
    The first was new mathematical concepts of attention, which really means that we now have a way that a machine can figure out which parts of the context it should actually focus on, just the way our brains do. Right? I mean, if you’re a clinician and somebody is talking to you, the majority of that conversation is not relevant for the diagnosis. But you know how to zoom in on the parts that matter. That’s a super powerful mathematical concept. The second one is this idea of self-supervision. So, I think one of the fundamental problems of machine learning has been that you have to train on labeled training data, and labels are expensive, which means data sets are small, which means the final models are very narrow and brittle. And the idea of self-supervision is that you can just get a model to automatically learn concepts, and in language that’s just predicting the next word. And what’s important about that is that it leads to models that can actually manipulate and understand really messy text, pull out what’s important, and then stitch that back together in interesting ways.
    And the third concept, that came out of those first two, was just the observation about scale. And that’s that more is better: more data, more compute, bigger models. And that really leads to a reason to keep investing, and for these models to keep getting better. So that, as a groundwork, is what led to ChatGPT. That’s what led to our ability now to not just have rule-based systems or simple machine-learning-based systems take a messy EHR record, say, and pull out a couple of concepts.
    But to really feed the whole thing in and say, okay, I need you to figure out which concepts are in here. And is this particular attribute there, for example. That’s now led to the next breakthrough, which is all those core ideas apply to images as well. They apply to proteins, to DNA. And so we’re starting to see models that understand images and the concepts of images, and can actually map those back to text as well. 
    So, you can look at a pathology image and say, not just what the cells are, but that it appears there’s a certain sort of cancer in this particular tissue. And then you take those two things together and you layer on the fact that now you have a model, or a set of models, that can understand intent, can understand human concepts and biomedical concepts, and you can start stitching them together into specialized agents that can actually reason with each other, which at some level gives you an API as a developer to say, okay, I need to focus on a pathology model and get this really, really sound, while somebody else is focusing on a radiology model, but now allows us to stitch these all together with a user interface that we can now talk to through natural language. 
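For reference, the attention and next-word-prediction ideas described above are usually written as scaled dot-product attention plus an autoregressive language-modeling loss. This is the standard textbook formulation, not anything specific to the models discussed here:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V, \qquad \mathcal{L}_{\text{LM}} = -\sum_{t} \log p_\theta\!\left(x_t \mid x_{<t}\right)$$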
    RUNDE: I’d like to double click a little bit on that medical abstraction piece that you mentioned. Just the amount of data, clinical data that there is for each individual patient. Let’s think about cancer patients for a second to make this real. Right. For every cancer patient, it could take a couple of hours to structure their information. And why is that important? Because, you have to get that information in a structured way and abstract relevant information to be able to unlock precision health applications right, for each patient. So, to be able to match them to a trial, right, someone has to sit there and go through all of the clinical notes from their entire patient care journey, from the beginning to the end. And that’s not scalable. And so one thing that we’ve been doing in an active project that we’ve been working on with a handful of our partners, but Providence specifically, I’ll call out, is using AI to actually abstract and curate that information. So that gives time back to the health care provider to spend with patients, instead of spending all their time curating this information. 
    And this is super important because it sets the scene and the backbone for all those precision health applications. Like I mentioned, clinical trial matching, tumor boards is another really important example here. Maybe Matt, you can talk to that a little bit.
    LUNGREN: It’s a great example. And you know, it’s so funny. We’ve talked about this use case and, you know, the healthcare workflows around it.
    And a tumor board is a critical meeting that happens at many cancer centers where specialists all get together, come with their perspective, and comment on what would be the best next step in treatment. But the background in preparing for that is, you know, again, organizing the data. But to your point, also, what are the clinical trials that are active? There are thousands of clinical trials. There are hundreds added every day. How can anyone keep up with that? And these are the kinds of use cases that start to bubble up. And you realize that a technology that understands concepts, context, and can reason over vast amounts of data with a language interface, that is a powerful tool. Even before we get to some of the, you know, unlocking new insights and even precision medicine, this is that idea of saving time before lives to me. And there’s an enormous amount of undifferentiated heavy lifting that happens in healthcare.
    GUYMAN: And we’ve packaged these agents, the manual abstraction work that, you know, manually takes hours. Now we have an agent. It’s in Foundry along with the clinical trial matching agent, which I think at Providence you showed could double the match rate over the baseline that they were using, by using the AI for multiple data sources. So, we have that, and then we have this orchestration that is using this really neat technology from Microsoft Research: Semantic Kernel, Magentic-One.
    There’s turn taking, there’s negotiation between the agents. So, there’s this really interesting system that’s emerging. And again, this is all possible to be used through Teams. And there’s some great extensibility as well. We’ve been talking about that and working on some cool tools. 
    SALIGRAMA: Yeah. Yeah. No, I think if I have to geek out a little bit on how all these agent orchestrations are coming up, like, I’ve been in software engineering for decades, and it’s kind of a next version of distributed systems, where you have these services that talk to each other. It’s a more natural way because LLMs give us natural language instead of structured APIs for conversing. We have these agents which can naturally understand how to talk to each other. Right. So this is like the next evolution of our systems now. And the way we’re packaging all of this is multiple ways, based on all the standards and innovation that’s happening in this space. So, first of all, we are building these agents that are very good at specific tasks, like, Will was saying, a trial matching agent or patient timeline agents. 
    So, we take all of these, and then we package it in a workflow and an orchestration. We use the standards, some of these coming from research: Semantic Kernel, Magentic-One. And then, all of these also allow us to extend these agents with custom agents that can be plugged in. So, we are open sourcing the entire agent orchestration in AI Foundry templates, so that developers can extend their own agents and make their own workflows out of it. So, a lot of cool innovation happening to apply this technology to specific scenarios and workflows. 
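To make the orchestration pattern concrete, here is a small, self-contained sketch of registering specialized agents with an orchestrator that routes each request to one of them. The class names, agents, and keyword-based routing are hypothetical stand-ins, not the actual healthcare agent orchestrator, Semantic Kernel, or Magentic-One APIs; in the real system an LLM decides which agent takes the next turn.

```python
# Illustrative sketch of the orchestration pattern: specialized agents are
# registered with an orchestrator that routes requests. All names are invented.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Agent:
    name: str
    description: str               # used by the orchestrator to decide routing
    handle: Callable[[str], str]   # the agent's task-specific logic

class Orchestrator:
    def __init__(self) -> None:
        self.agents: Dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        """Plug in a first-party, third-party, or custom agent."""
        self.agents[agent.name] = agent

    def route(self, request: str) -> str:
        # A real orchestrator would use an LLM to pick an agent from its
        # description; here we fake that with simple keyword matching.
        for agent in self.agents.values():
            if any(word in request.lower() for word in agent.description.lower().split()):
                return f"[{agent.name}] {agent.handle(request)}"
        return "No suitable agent found."

orchestrator = Orchestrator()
orchestrator.register(Agent(
    name="trial-matcher",
    description="clinical trial matching eligibility",
    handle=lambda req: "Candidate trials: NCT00000000 (placeholder output).",
))
orchestrator.register(Agent(
    name="patient-timeline",
    description="timeline summary history abstraction",
    handle=lambda req: "Structured patient timeline (placeholder output).",
))

print(orchestrator.route("Which clinical trial might this patient be eligible for?"))
```

A custom agent plugs in the same way: define its description and handler and register it alongside the built-in ones.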
    LUNGREN: Well, I was going to ask you, like, so as part of that extension. So, like, you know, folks can say, hey, I have maybe a really specific part of my workflow that I want to use some agents for, maybe one of the agents that can do PubMed literature search, for example. But then there’s also agents that come in from the outside, you know, sort of like I can imagine a software company or AI company that has a built-in agent that plugs in as well. 
    SALIGRAMA: Yeah. Yeah, absolutely. So, you can bring your own agent. And then we have these standard ways of communicating with agents and integrating with the orchestration language, so you can bring your own agent and extend this healthcare agent orchestrator to your own needs. 
    LUNGREN: I can just think of, like, in a group chat, like a bunch of different specialist agents. And I really would want an orchestrator to help find the right tool, to your point earlier, because I’m guessing this ecosystem is going to expand quickly. Yeah. And I may not know which tool is best for which question. I just want to ask the question. Right. 
    SALIGRAMA: Yeah. Yeah. 
    CARLSON: Well, I think to that point, too, I mean, you made an important point here, which is tools, and these are not necessarily just AI tools. Right? I mean, we’ve known this for a while, right? LLMs are not very good at math, but you can have them use a calculator and then it works very well. And you know, you guys both brought up the universal medical abstraction a couple times. 
    And one of the things that I find so powerful about that is we’ve long had this vision within the precision health community that we should be able to have a learning hospital system. We should be able to actually learn from the actual real clinical experiences that are happening every day, so that we can stop practicing medicine based off averages. 
    There’s a lot of work that’s gone on for the last 20 years about how to actually do causal inference. That’s not an AI question. That’s a statistical question. The bottleneck, the reason why we haven’t been able to do that is because most of that information is locked up in unstructured text. And these other tools need essentially a table. 
    And so now you can decompose this problem, say, well, what if I can use AI not to get to the causal answer, but to just structure the information. So now I can put it into the causal inference tool. And these sorts of patterns I think again become very, not just powerful for a programmer, but they start pulling together different specialties. And I think we’ll really see an acceleration, really, of collaboration across disciplines because of this. 
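A minimal sketch of that decomposition, with invented example notes and a stubbed extraction step standing in for the abstraction model: the model's only job is to produce a table, and an ordinary statistical or causal inference tool takes it from there.

```python
# Sketch: use a model only to turn unstructured notes into a structured table,
# then hand the table to conventional statistics. The notes, fields, and the
# keyword-based "extraction" below are invented stand-ins for a model call.
import pandas as pd

def extract_fields(note: str) -> dict:
    # Placeholder for an LLM/abstraction-model call returning structured fields.
    return {
        "treated_with_drug_a": "drug A" in note,
        "progressed": "progression" in note,
    }

notes = [
    "Started drug A in March; stable disease at 12-month follow-up.",
    "Continued standard of care; imaging shows progression.",
    "Started drug A; progression noted at 6 months.",
    "Standard of care; stable disease at latest imaging.",
]

table = pd.DataFrame([extract_fields(n) for n in notes])

# Once the information is tabular, classical tools take over; here a simple
# 2x2 table and progression rate by treatment stand in for causal analysis.
print(pd.crosstab(table["treated_with_drug_a"], table["progressed"]))
print(table.groupby("treated_with_drug_a")["progressed"].mean())
```

The point of the design is that the statistical machinery stays classical; the model only removes the bottleneck of getting from free text to a table.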
    CARLSON: So, when I joined Microsoft Research 18 years ago, I was doing work in computational biology. And I would always have to answer the question: why is Microsoft in biomedicine? And I would always kind of joke, saying, well, we sell Office and Windows to every health system.
    SALIGRAMA: A lot of healthcare organizations already use Microsoft productivity tools, as you mentioned. So if developers build these agents and use our healthcare orchestration to plug them in and expose them in these productivity tools, they will get access to all these healthcare workers. So the healthcare agent orchestrator we have today integrates with Microsoft Teams, and it showcases an example of how you can @-mention these agents and talk to them like you were talking to another person in a Teams chat. And then it also provides examples of these agents and how they can use these productivity tools. One of the examples we have there is how they can summarize the assessments of this whole chat into a Word doc, or even convert that into a PowerPoint presentation, for later on.
    CARLSON: One of the things that has struck me is how easy it is to do. I mean, Will, I don’t know if you’ve worked with folks that have gone from 0 to 60, like, how fast? What does that look like? 
    GUYMAN: Yeah, it’s funny, for us the technology to transfer all this context into a Word document or PowerPoint presentation for a doctor to take to a meeting is relatively straightforward compared to the complicated clinical trial matching and multimodal processing. The feedback has been tremendous in terms of, wow, that saves so much time to have this organized report that I can then show up to a meeting with. And the agents can come with me to that meeting, because they’re literally having a Teams meeting, often with other human specialists. And the agents can be there and ask and answer questions and fact-check and source all the right information on the fly. So, there’s a nice integration into these existing tools. 
    LUNGREN: We worked with several different centers just to kind of understand, you know, where this might be useful. And, like, as I think we talked about before, the ideas that we’ve come up with, again, this is a great one because it’s complex. It’s kind of hairy. There’s a lot of things happening under the hood that don’t necessarily require a medical license to do, right, to prepare for a tumor board and to organize data. But it’s fascinating, actually. So, you know, folks have come up with ideas of, could I have an agent that can operate an MRI machine, and I can ask the agent to change some parameters or redo a protocol. We thought that was a pretty powerful use case. We’ve had others that have just said, you know, I really want to have a specific agent that’s able to kind of act like deep research does for the consumer side, but based on the context of my patient, so that it can search all the literature and pull the data in the papers that are relevant to this case. And the list goes on and on, from operations all the way to clinical, you know, sort of decision making at some level. And I think that the research community that’s going to sprout around this will help guide us, I guess, to see what are the most high-impact use cases, where is this effective, and maybe where it’s not effective.
    But to me, the part that makes me so, I guess, excited about this is just that I don’t have to think about, okay, well, then we have to figure out health IT. Because, you know, we always have great ideas in research, and it always feels like there’s such a huge chasm to get it in front of the healthcare workers that might want to test this out. And it feels like, again, this productivity tool use case, again, with the enterprise security, the possibility for bringing in third parties to contribute, really does feel like it’s a new surface area for innovation.
    CARLSON: Yeah, I love that. Look. Let me end by putting you all on the spot. So, in three years, multimodal agents will do what? Matt, I’ll start with you. 
    LUNGREN: I am convinced that it’s going to save massive amount of time before it saves many lives. 
    RUNDE: I’ll focus on the patient care journey and diagnostic journey. I think it will kind of transform that process for the patient itself and shorten that process. 
    GUYMAN: Yeah, I think we’ve seen already papers recently showing that different modalities surfaced complementary information. And so we’ll see kind of this AI and these agents becoming an essential companion to the physician, surfacing insights that would have been overlooked otherwise. 
    SALIGRAMA: And similar to what you guys were saying, agents will become important assistants to healthcare workers, reducing a lot of the documentation and excess workflow they have to do. 
    CARLSON: I love that. And I guess for my part, I think really what we’re going to see is a massive unleash of creativity. We’ve had a lot of folks that have been innovating in this space, but they haven’t had a way to actually get it into the hands of early adopters. And I think we’re going to see that really lead to an explosion of creativity across the ecosystem. 
    LUNGREN: So, where do we get started? Like where are the developers who are listening to this? The folks that are at, you know, labs, research labs and developing health care solutions. Where do they go to get started with the Foundry, the models we’ve talked about, the healthcare agent orchestrator. Where do they go?
    GUYMAN: So AI.azure.com is the AI Foundry. It’s a website you can go to as a developer. You can sign in with your Azure subscription, get your Azure account, your own VM, all that stuff. And you have an agent catalog, the model catalog. You can start from there. There is documentation and templates that you can then deploy to Teams or other applications. 
    LUNGREN: And tutorials are coming. Right. We have recordings of tutorials. We’ll have Hackathons, some sessions and then more to come. Yeah, we’re really excited.  
    LUNGREN: Thank you so much, guys for joining us. 
    CARLSON: Yes. Yeah. Thanks. 
    SALIGRAMA: Thanks for having us.  
    #collaborators #healthcare #innovation #impact
    Collaborators: Healthcare Innovation to Impact
    JONATHAN CARLSON: From the beginning, healthcare stood out to us as an important opportunity for general reasoners to improve the lives and experiences of patients and providers. Indeed, in the past two years, there’s been an explosion of scientific papers looking at the application first of text reasoners and medicine, then multi-modal reasoners that can interpret medical images, and now, most recently, healthcare agents that can reason with each other. But even more impressive than the pace of research has been the surprisingly rapid diffusion of this technology into real world clinical workflows.  LUNGREN: So today, we’ll talk about how our cross-company collaboration has shortened that gap and delivered advanced AI capabilities and solutions into the hands of developers and clinicians around the world, empowering everyone in health and life sciences to achieve more. I’m Doctor Matt Lungren, chief scientific officer for Microsoft Health and Life Sciences.  CARLSON: And I’m Jonathan Carlson, vice president and managing director of Microsoft Health Futures.  LUNGREN: And together we brought some key players leading in the space of AI and health CARLSON: We’ve asked these brilliant folks to join us because each of them represents a mission critical group of cutting-edge stakeholders, scaling breakthroughs into purpose-built solutions and capabilities for health LUNGREN: We’ll hear today how generative AI capabilities can unlock reasoning across every data type in medicine: text, images, waveforms, genomics. And further, how multi-agent frameworks in healthcare can accelerate complex workflows, in some cases acting as a specialist team member, safely secured inside the Microsoft 365 tools used by hundreds of millions of healthcare enterprise users across the world. The opportunity to save time today and lives tomorrow with AI has never been larger.  MATTHEW LUNGREN: Jonathan. You know, it’s been really interesting kind of observing Microsoft Research over the decades. I’ve, you know, been watching you guys in my prior academic career. You are always on the front of innovation, particularly in health  JONATHAN CARLSON: I mean, it’s some of what’s in our DNA, I mean, we’ve been publishing in health and life sciences for two decades here. But when we launched Health Futures as a mission-focused lab about 7 or 8 years ago, we really started with the premise that the way to have impact was to really close the loop between, not just good ideas that get published, but good ideas that can actually be grounded in real problems that clinicians and scientists care about, that then allow us to actually go from that first proof of concept into an incubation, into getting real world feedback that allows us to close that loop. And now with, you know, the HLS organization here as a product group, we have the opportunity to work really closely with you all to not just prove what’s possible in the clinic or in the lab, but actually start scaling that into the broader community.  CAMERON RUNDE: And one thing I’ll add here is that the problems that we’re trying to tackle in health CARLSON: So, Matt, back to you. What are you guys doing in the product group? How do you guys see these models getting into the clinic? LUNGREN: You know, I think a lot of people, you know, think about AI is just, you know, maybe just even a few years old because of GPT and how that really captured the public’s consciousness. Right? 
And so, you think about the speech-to-text technology of being able to dictate something, for a clinic note or for a visit, that was typically based on Nuance technology. And so there’s a lot of product understanding of the market, how to deliver something that clinicians will use, understanding the pain points and workflows and really that Health IT space, which is sometimes the third rail, I feel like with a lot of innovation in healthcare.  But beyond that, I mean, I think now that we have this really powerful engine of Microsoft and the platform capabilities, we’re seeing, innovations on the healthcare side for data storage, data interoperability, with different types of medical data. You have new applications coming online, the ability, of course, to see generative AI now infused into the speech-to-text and, becoming Dragon Copilot, which is something that has been, you know, tremendously, received by the community.  Physicians are able to now just have a conversation with a patient. They turn to their computer and the note is ready for them. There’s no more this, we call it keyboard liberation. I don’t know if you heard that before. And that’s just been tremendous. And there’s so much more coming from that side. And then there’s other parts of the workflow that we also get engaged in — the diagnostic workflow. So medical imaging, sharing images across different hospital systems, the list goes on. And so now when you move into AI, we feel like there’s a huge opportunity to deliver capabilities into the clinical workflow via the products and solutions we already have. But, I mean, we’ll now that we’ve kind of expanded our team to involve Azure and platform, we’re really able to now focus on the developers. WILL GUYMAN: Yeah. And you’re always telling me as a doctor how frustrating it is to be spending time at the computer instead of with your patients. I think you told me, you know, 4,000 clicks a day for the typical doctor, which is tremendous. And something like Dragon Copilot can save that five minutes per patient. But it can also now take actions after the patient encounter so it can draft the after-visit summary.  It can order labs and medications for the referral. And that’s incredible. And we want to keep building on that. There’s so many other use cases across the ecosystem. And so that’s why in Azure AI Foundry, we have translated a lot of the research from Microsoft Research and made that available to developers to build and customize for their own applications.  SMITHA SALIGRAMA: Yeah. And as you were saying, in our transformation of moving from solutions to platforms and as, scaling solutions to other, multiple scenarios, as we put our models in AI Foundry, we provide these developer capabilities like bring your own data and fine LUNGREN: Well, I want to do a reality check because, you know, I think to us that are now really focused on technology, it seems like, I’ve heard this story before, right. I, I remember even in, my academic clinical days where it felt like technology was always the quick answer and it felt like technology was, there was maybe a disconnect between what my problems were or what I think needed to be done versus kind of the solutions that were kind of, created or offered to us. And I guess at some level, how Jonathan, do you think about this? Because to do things well in the science space is one thing, to do things well in science, but then also have it be something that actually drives health CARLSON: Yeah. 
I mean, as you said, I think one of the core pathologies of Big Tech is we assume every problem is a technology problem. And that’s all it will take to solve the problem. And I think, look, I was trained as a computational biologist, and that sits in the awkward middle between biology and computation. And the thing that we always have to remember, the thing that we were very acutely aware of when we set out, was that we are not the experts. We do have, you know, you as an M.D., we have everybody on the team, we have biologists on the team.  But this is a big space. And the only way we’re going to have real impact, the only way we’re even going to pick the right problems to work on is if we really partner deeply, with providers, with EHRvendors, with scientists, and really understand what’s important and again, get that feedback loop.  RUNDE: Yeah, I think we really need to ground the work that we do in the science itself. You need to understand the broader ecosystem and the broader landscape, across healthwe think are important. Because, as Jonathan said, we’re not the experts in health CARLSON: When we really launched this, this mission, 7 or 8 years ago, we really came in with the premise of, if we decide to stop, we want to be sure the world cares. And the only way that’s going to be true is if we’re really deeply embedded with the people that matter–the patients, the providers and the scientists. LUNGREN: And now it really feels like this collaborative effort, you know, really can help start to extend that mission. Right. I think, you know, Will and Smitha, that we definitely feel the passion and the innovation. And we certainly benefit from those collaborations, too. But then we have these other partners and even customers, right, that we can start to tap into and have that flywheel keep spinning.  GUYMAN: Yeah. And the whole industry is an ecosystem. So, we have our own data sets at Microsoft Research that you’ve trained amazing AI models with. And those are in the catalog. But then you’ve also partnered with institutions like Providence or Page AI . And those models are in the catalog with their data. And then there are third parties like Nvidia that have their own specialized proprietary data sets, and their models are there too. So, we have this ecosystem of open source models. And maybe Smitha, you want to talk about how developers can actually customize these.  SALIGRAMA: Yeah. So we use the Azure AI Foundry ecosystem. Developers can feel at home if they’re using the AI Foundry. So they can look at our model cards that we publish as part of the models we publish, understand the use cases of these models, how to, quickly, bring up these APIs and, look at different use cases of how to apply these and even fine LUNGREN: Yeah it has been interesting to see we have these health GUYMAN: Well, the general-purpose large language models are amazing for medical general reasoning. So Microsoft Research has shown that that they can perform super well on, for example, like the United States medical licensing exam, they can exceed doctor performance if they’re just picking between different multiple-choice questions. But real medicine we know is messier. It doesn’t always start with the whole patient context provided as text in the prompt. You have to get the source data and that raw data is often non-text. The majority of it is non-text. It’s things like medical imaging, radiology, pathology, ophthalmology, dermatology. It goes on and on. And there’s endless signal data, lab data. 
And so all of this diverse data type needs to be processed through specialized models because much of that data is not available on the public internet.  And that’s why we’re taking this partner approach, first party and third party models that can interpret all this kind of data and then connect them ultimately back to these general reasoners to reason over that.  LUNGREN: So, you know, I’ve been at this company for a while and, you know, familiar with kind of how long it takes, generally to get, you know, a really good research paper, do all the studies, do all the data analysis, and then go through the process of publishing, right, which takes, as, you know, a long time and it’s, you know, very rigorous.  And one of the things that struck me, last year, I think we, we started this big collaboration and, within a quarter, you had a Nature paper coming out from Microsoft Research, and that model that the Nature paper was describing was ready to be used by anyone on the Azure AI Foundry within that same quarter. It kind of blew my mind when I thought about it, you know, even though we were all, you know, working very hard to get that done. Any thoughts on that? I mean, has this ever happened in your career? And, you know, what’s the secret sauce to that?  CARLSON: Yeah, I mean, the time scale from research to product has been massively compressed. And I’d push that even further, which is to say, the reason why it took a quarter was because we were laying the railroad tracks as we’re driving the train. We have examples right after that when we are launching on Foundry the same day we were publishing the paper.  And frankly, the review times are becoming longer than it takes to actually productize the models. I think there’s two things that are going on with that are really converging. One is that the overall ecosystem is converging on a relatively small number of patterns, and that gives us, as a tech company, a reason to go off and really make those patterns hardened in a way that allows not just us, but third parties as well, to really have a nice workflow to publish these models.  But the other is actually, I think, a change in how we work, you know, and for most of our history as an industrial research lab, we would do research and then we’d go pitch it to somebody and try and throw it over the fence. We’ve really built a much more integrated team. In fact, if you look at that Nature paper or any of the other papers, there’s folks from product teams. Many of you are on the papers along with our clinical collaborators. RUNDE: Yeah. I think one thing that’s really important to note is that there’s a ton of different ways that you can have impact, right? So I like to think about phasing. In Health Futures at least, I like to think about phasing the work that we do. So first we have research, which is really early innovation. And the impact there is getting our technology and our tools out there and really sharing the learnings that we’ve had.  So that can be through publications like you mentioned. It can be through open-sourcing our models. And then you go to incubation. So, this is, I think, one of the more new spaces that we’re getting into, which is maybe that blurred line between research and product. Right. Which is, how do we take the tools and technologies that we’ve built and get them into the hands of users, typically through our partnerships?  Right. So, we partner very deeply and collaborate very deeply across the industry. 
And incubation is really important because we get that early feedback. We get an ability to pivot if we need to. And we also get the ability to see what types of impact our technology is having in the real world. And then lastly, when you think about scale, there’s tons of different ways that you can scale. We can scale third-party through our collaborators and really empower them to go to market to commercialize the things that we’ve built together.  You can also think about scaling internally, which is why I’m so thankful that we’ve created this flywheel between research and product, and a lot of the models that we’ve built that have gone through research, have gone through incubation, have been able to scale on the Azure AI Foundry. But that’s not really our expertise. Right? The scale piece in research, that’s research and incubation. Smitha, how do you think about scaling?  SALIGRAMA: So, there are several angles to scaling the models, the state-of-the-art models we see from the research team. The first angle is, the open sourcing, to get developer trust, and very generous commercial licenses so that they can use it and for their own, use cases. The second is, we also allow them to customize these models, fine GUYMAN: And as one example, you know, University of Wisconsin Health, you know, which Matt knows well. They took one of our models, which is highly versatile. They customized it in Foundry and they optimized it to reliably identify abnormal chest X-rays, the most common imaging procedure, so they could improve their turnaround time triage quickly. And that’s just one example. But we have other partners like Sectra who are doing more of operations use cases automatically routing imaging to the radiologists, setting them up to be efficient. And then Page AI is doing, you know, biomarker identification for actually diagnostics and new drug discovery. So, there’s so many use cases that we have partners already who are building and customizing. LUNGREN: The part that’s striking to me is just that, you know, we could all sit in a room and think about all the different ways someone might use these models on the catalog. And I’m still shocked at the stuff that people use them for and how effective they are. And I think part of that is, you know, again, we talk a lot about generative AI and healthcare and all the things you can do. Again, you know, in text, you refer to that earlier and certainly off the shelf, there’s really powerful applications. But there is, you know, kind of this tip of the iceberg effect where under the water, most of the data that we use to take care of our patients is not text. Right. It’s all the different other modalities. And I think that this has been an unlock right, sort of taking these innovations, innovations from the community, putting them in this ecosystem kind of catalog, essentially. Right. And then allowing folks to kind of, you know, build and develop applications with all these different types of data. Again, I’ve been surprised at what I’m seeing.  CARLSON: This has been just one of the most profound shifts that’s happened in the last 12 months, really. I mean, two years ago we had general models in text that really shifted how we think about, I mean, natural language processing got totally upended by that. Turns out the same technology works for images as well. 
It doesn't only allow you to automatically extract concepts from images; it allows you to align those image concepts with text concepts, which means that you can have a conversation with that image. And once you're in that world, you're in a place where you can start stitching together these multimodal models that really change how you can interact with the data, and how you can start getting more information out of the raw primary data that is part of the patient journey.

LUNGREN: Well, and we're going to get to that, because I think you just touched on something, and I want to re-emphasize stitching these things together. There are a lot of different ways to potentially do that, right? There are ways that you can literally train the model end to end, with adapters and all kinds of other early-fusion approaches. All kinds of ways. But one of the things, the word of the year, I guess, is going to be agents. And an agent is a very interesting term to think about how you might abstract away some of the components or the tasks that you want the model to accomplish in the midst of a real human-to-model interaction. Can you talk a little bit more about how we're thinking about agents in this platform approach?

GUYMAN: Well, this is our newest addition to the Azure AI Foundry. So there's an agent catalog now where we have a set of pre-configured agents for healthcare. And then we also have a multi-agent orchestrator that can jump in and coordinate those agents.

LUNGREN: And I really like that concept because, you know, from the user personas, I think about myself as a user. How am I going to interact with these agents? Where does it naturally fit? And I've seen some of the demonstrations and some of the work that's going on with Stanford in particular, showing that, literally in a Teams chat, I can have my clinician colleagues and I can have specialized healthcare agents. It is a completely mind-blowing thing for me. And it's a light bulb moment for me, too. I wonder, what have we heard from folks that have, you know, tried out this healthcare agent orchestrator in this kind of deployment environment via Teams?

GUYMAN: Well, someone joked, you know, are you sure you're not using Teams because you work at Microsoft? [LAUGHS] But then we actually were meeting with one of the radiologists at one of our partners, and they said that that morning they had just done a Teams meeting, where they had met with other specialists to talk about a patient's cancer case and come up with a treatment plan.

And that was the light bulb moment for us. We realized, actually, Teams is already being used by physicians as an internal communication tool, as a tool to get work done. And especially since the pandemic, a lot of the meetings moved to virtual and telemedicine. And so it's a great distribution channel for AI, which has often been a struggle, actually getting AI into the hands of clinicians. And so now we're allowing developers to build and then deploy very easily and extend it into their own workflows.

CARLSON: I think that's such an important point. I mean, if you think about it, one of the really important concepts in computer science is an application programming interface, some set of rules that allow two applications to talk to each other.
One of the big pushes, really important pushes, in medicine has been standards, so that we have data standards and APIs that allow these systems to talk to each other, and yet still we end up with silos. There are silos of data. There are silos of applications. And just like when you and I work on our phones, we have to go back and forth between applications. One of the things that I think agents do is take the idea that you can now use language to understand intent and effectively program an interface, and create a whole new abstraction layer that simplifies the interaction, not just between humans and the endpoint, but also for developers.

It allows us to have this abstraction layer that lets different developers focus on different types of models, and yet stitch them all together in a very, very natural way, not just for the users, but for the ability to actually deploy those models.

SALIGRAMA: Just to add to what Jonathan was mentioning, the other cool thing about the Microsoft Teams user interface is it's also enterprise ready.

RUNDE: And one important thing that we're thinking about is exactly this, from the very early research through incubation and then to scale, obviously. Right. And so early on in research, we are actively working with our partners and our collaborators to make sure that we have the right data privacy and consent in place. We're doing this in incubation as well. And then obviously at scale. Yep.

LUNGREN: So, I think AI has always been thought of as a savior kind of technology. We talked a little bit about how there have been some ups and downs in terms of the ability for technology to be effective in healthcare. At the same time, we're seeing a lot of new innovations that are really making a difference. But then, we talked about agents a little bit, it feels like we're maybe abstracting too far. Maybe things are going too fast, almost. What makes this different? I mean, in your mind, is this truly a logical next step, or is it going to take some time?

CARLSON: I think there are a couple things that have happened. First, on just the pure technology: what led to ChatGPT? I like to think of really three major breakthroughs. The first was the new mathematical concept of attention, which really means that we now have a way for a machine to figure out which parts of the context it should actually focus on, just the way our brains do. Right? I mean, if you're a clinician and somebody is talking to you, the majority of that conversation is not relevant for the diagnosis. But you know how to zoom in on the parts that matter. That's a super powerful mathematical concept.

The second one is this idea of self-supervision. I think one of the fundamental problems of machine learning has been that you have to train on labeled data, and labels are expensive, which means data sets are small, which means the final models are very narrow and brittle. The idea of self-supervision is that you can just get a model to automatically learn concepts, and for language that's just predicting the next word. And what's important about that is that it leads to models that can actually manipulate and understand really messy text, pull out what's important, and then stitch that back together in interesting ways.

And the third concept, which came out of those first two, was the observation about scale: more is better, more data, more compute, bigger models.
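(To make the attention idea above concrete, here is a minimal, self-contained numpy sketch of scaled dot-product attention, the mechanism that lets a model weight which parts of its context to focus on. It is illustrative only, not any production model's code; in a real model the queries, keys, and values are learned projections, and the training signal is the self-supervised next-word prediction Carlson mentions.)

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each query re-weights the values by how relevant each key is to it."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity of every query to every key
    weights = softmax(scores, axis=-1)  # "which parts of the context to focus on"
    return weights @ V                  # weighted mix of the values

# Toy example: a context of 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = attention(tokens, tokens, tokens)  # self-attention over the context
print(out.shape)                         # (4, 8): one context-aware vector per token
```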
And that really gives a reason to keep investing, and for these models to keep getting better. So that, as a groundwork, is what led to ChatGPT. That's what led to our ability now to not just have rule-based systems or simple machine learning systems take a messy EHR record, say, and pull out a couple of concepts, but to really feed the whole thing in and say, okay, I need you to figure out which concepts are in here, and is this particular attribute there, for example. That's now led to the next breakthrough, which is that all those core ideas apply to images as well. They apply to proteins, to DNA. And so we're starting to see models that understand images and the concepts in images, and can actually map those back to text as well.

So, you can look at a pathology image and say, not just that there's a cell, but that there appears to be a certain sort of cancer in this particular tissue. And then you take those two things together and you layer on the fact that now you have a model, or a set of models, that can understand intent, can understand human concepts and biomedical concepts, and you can start stitching them together into specialized agents that can actually reason with each other. Which, at some level, gives you an API as a developer to say, okay, I need to focus on a pathology model and get this really, really sound, while somebody else is focusing on a radiology model, but it now allows us to stitch these all together with a user interface that we can talk to through natural language.

RUNDE: I'd like to double-click a little bit on that medical abstraction piece that you mentioned, just the amount of clinical data that there is for each individual patient. Let's think about cancer patients for a second to make this real. For every cancer patient, it could take a couple of hours to structure their information. And why is that important? Because you have to get that information in a structured way and abstract the relevant information to be able to unlock precision health applications for each patient. So, to be able to match them to a trial, someone has to sit there and go through all of the clinical notes from their entire patient care journey, from the beginning to the end. And that's not scalable. And so one thing that we've been doing, in an active project that we've been working on with a handful of our partners, but Providence specifically I'll call out, is using AI to actually abstract and curate that information. So that gives time back to the healthcare provider to spend with patients, instead of spending all their time curating this information.

And this is super important because it sets the scene and the backbone for all those precision health applications. Like I mentioned, clinical trial matching; tumor boards are another really important example here. Maybe Matt, you can talk to that a little bit.

LUNGREN: It's a great example. And, you know, it's so funny, we've talked about this use case a lot in the healthcare context. A tumor board is a critical meeting that happens at many cancer centers where specialists all get together, come with their perspective, and comment on what would be the best next step in treatment. But the background in preparing for that is, you know, again, organizing the data. But to your point, also, what are the clinical trials that are active? There are thousands of clinical trials, and hundreds are added every day. How can anyone keep up with that?
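(A rough, hypothetical sketch of the abstraction-to-structure pattern Runde describes: free-text notes are turned into a structured table that downstream tools, such as trial matching or the causal-inference tooling mentioned later in the conversation, can consume. The extract_structured stub, its fields, and the eligibility check are illustrative placeholders, not the actual Foundry abstraction or trial-matching agents.)

```python
# Hypothetical sketch: structure free-text oncology notes so downstream tools
# (trial matching, tumor-board prep, statistical analysis) can work over a table.
# The extraction step is stubbed; in practice an LLM or a specialized
# abstraction agent would fill these fields from the note text.
import json

FIELDS = ["diagnosis", "stage", "biomarkers", "prior_treatments"]

def extract_structured(note: str) -> dict:
    """Stub for an LLM-based abstraction step (placeholder keyword logic only)."""
    text = note.lower()
    return {
        "diagnosis": "non-small cell lung cancer" if "lung" in text else "unknown",
        "stage": "III" if "stage iii" in text else "unknown",
        "biomarkers": ["EGFR+"] if "egfr" in text else [],
        "prior_treatments": [],
    }

def matches_trial(record: dict, trial_criteria: dict) -> bool:
    """Toy eligibility check: every required biomarker must be present."""
    required = set(trial_criteria.get("required_biomarkers", []))
    return required.issubset(set(record["biomarkers"]))

notes = ["Stage III lung adenocarcinoma, EGFR mutation detected on biopsy."]
table = [extract_structured(n) for n in notes]        # the structured backbone
trial = {"required_biomarkers": ["EGFR+"]}
print(json.dumps(table, indent=2))
print([matches_trial(r, trial) for r in table])        # [True]
```

The structured table, not the free text, is what makes the precision-health applications downstream tractable.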
And these are the kinds of use cases that start to bubble up. And you realize that a technology that understands concepts and context, and can reason over vast amounts of data with a language interface, is a powerful tool. Even before we get to, you know, unlocking new insights and even precision medicine, this is that idea of saving time before saving lives, to me. And there's an enormous amount of undifferentiated heavy lifting that happens in healthcare.

GUYMAN: And we've packaged these agents. The manual abstraction work that takes hours, now we have an agent for. It's in Foundry along with the clinical trial matching agent, which I think at Providence you showed could double the match rate over the baseline they were using, by using the AI across multiple data sources. So, we have that, and then we have this orchestration that is using this really neat technology from Microsoft Research: Semantic Kernel, Magentic-One. There's turn-taking, there's negotiation between the agents. So, there's this really interesting system that's emerging. And again, this is all possible to be used through Teams. And there's some great extensibility as well. We've been talking about that and working on some cool tools.

SALIGRAMA: Yeah. If I can geek out a little bit on how all of this agentic orchestration is coming together: I've been in software engineering for decades, and this is kind of the next version of distributed systems, where you have these services that talk to each other. It's a more natural way, because LLMs give these agents a natural language for conversing instead of structured APIs. We have these agents which can naturally understand how to talk to each other. So this is like the next evolution of our systems. And we're packaging all of this in multiple ways, based on all the standards and innovation that's happening in this space. So, first of all, we are building these agents that are very good at specific tasks, like, as Will was saying, a trial matching agent or patient timeline agents.

So, we take all of these, and then we package them in a workflow and an orchestration. We use standard frameworks, some of them coming from research: Semantic Kernel, Magentic-One. And all of this also allows us to extend these agents with custom agents that can be plugged in. So, we are open-sourcing the entire agent orchestration in AI Foundry templates, so that developers can extend their own agents and make their own workflows out of it. So, a lot of cool innovation is happening to apply this technology to specific scenarios and workflows.

LUNGREN: Well, I was going to ask you about that extension. So, you know, folks can say, hey, I have a really specific part of my workflow that I want to use some agents for, maybe one of the agents that can do PubMed literature search, for example. But then there are also agents that come in from the outside; I can imagine a software company or AI company that has a built-in agent that plugs in as well.

SALIGRAMA: Yeah, absolutely. So, you can bring your own agent. And then we have standard ways of communicating with agents and integrating with the orchestration language, so you can bring your own agent and extend this healthcare agent orchestrator to your own needs.

LUNGREN: I can just think of, like, in a group chat, a bunch of different specialist agents.
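(The bring-your-own-agent pattern Saligrama describes might look, in a deliberately simplified form, like the sketch below. The class names and the keyword-overlap routing are hypothetical stand-ins; the actual healthcare agent orchestrator builds on Semantic Kernel and Magentic-One, where an LLM handles turn-taking and negotiation between agents rather than a hand-written rule.)

```python
# Hypothetical sketch of a specialist-agent registry with a simple orchestrator.
# Names and routing logic are illustrative, not the real orchestrator's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    description: str              # what the specialist agent is good at
    handle: Callable[[str], str]  # the agent's task-specific logic

class Orchestrator:
    def __init__(self) -> None:
        self.agents: list[Agent] = []

    def register(self, agent: Agent) -> None:
        """Bring your own agent: plug a new specialist into the group chat."""
        self.agents.append(agent)

    def route(self, question: str) -> str:
        # Toy routing: pick the agent whose description overlaps the question most.
        # A real orchestrator would let an LLM decide who speaks next instead.
        words = set(question.lower().split())
        best = max(self.agents, key=lambda a: len(words & set(a.description.lower().split())))
        return f"[{best.name}] {best.handle(question)}"

orch = Orchestrator()
orch.register(Agent("trial-matcher", "match patient to clinical trial",
                    lambda q: "3 candidate trials found (stub)"))
orch.register(Agent("timeline", "summarize patient timeline from notes",
                    lambda q: "timeline summary (stub)"))
print(orch.route("Which clinical trial could match this patient?"))
```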
And I really would want an orchestrator to help find the right tool, to your point earlier, because I'm guessing this ecosystem is going to expand quickly. And I may not know which tool is best for which question. I just want to ask the question. Right.

SALIGRAMA: Yeah. Yeah.

CARLSON: Well, I think to that point, too, you said something important here, which is tools, and these are not necessarily just AI tools. Right? I mean, we've known this for a while: LLMs are not very good at math, but you can have one use a calculator and then it works very well. And you both brought up the universal medical abstraction a couple of times.

And one of the things that I find so powerful about that is we've long had this vision within the precision health community that we should be able to have a learning hospital system. We should be able to actually learn from the real clinical experiences that are happening every day, so that we can stop practicing medicine based off averages.

There's a lot of work that's gone on for the last 20 years about how to actually do causal inference. That's not an AI question. That's a statistical question. The bottleneck, the reason why we haven't been able to do that, is because most of that information is locked up in unstructured text, and those other tools essentially need a table.

And so now you can decompose this problem and say, well, what if I use AI not to get to the causal answer, but just to structure the information, so now I can put it into the causal inference tool. And these sorts of patterns, I think, become very powerful, not just for a programmer; they start pulling together different specialties. And I think we'll really see an acceleration of collaboration across disciplines because of this.

CARLSON: So, when I joined Microsoft Research 18 years ago, I was doing work in computational biology. And I would always have to answer the question: why is Microsoft in biomedicine? And I would always kind of joke, saying, well, it is. We sell Office and Windows to every health system.

SALIGRAMA: A lot of healthcare organizations already use Microsoft productivity tools, as you mentioned. So, as developers build these agents and use our healthcare orchestration to plug them in and expose them in these productivity tools, they will get access to all these healthcare workers. The healthcare agent orchestrator we have today integrates with Microsoft Teams, and it showcases an example of how you can at (@) mention these agents and talk to them like you were talking to another person in a Teams chat. And then it also provides examples of these agents and how they can use these productivity tools. One of the examples we have there is how they can summarize the assessments of this whole chat into a Word doc, or even convert that into a PowerPoint presentation for later on.

CARLSON: One of the things that has struck me is how easy it is to do. I mean, Will, I don't know if you've worked with folks that have gone from 0 to 60, like, how fast? What does that look like?

GUYMAN: Yeah, it's funny; for us, the technology to transfer all this context into a Word document or PowerPoint presentation for a doctor to take to a meeting is relatively straightforward compared to the complicated clinical trial matching and multimodal processing.
The feedback has been tremendous, in terms of, wow, that saves so much time to have this organized report that I can show up to the meeting with. And the agents can come with me to that meeting, because it's literally a Teams meeting, often with other human specialists. And the agents can be there and ask and answer questions and fact-check and source all the right information on the fly. So, there's a nice integration into these existing tools.

LUNGREN: We worked with several different centers just to kind of understand, you know, where this might be useful. And, as I think we talked about before, the ideas that we've come up with, again, this is a great one because it's complex. It's kind of hairy. There are a lot of things happening under the hood that don't necessarily require a medical license to do, right, to prepare for a tumor board and to organize data. But it's fascinating, actually. So, you know, folks have come up with ideas of, could I have an agent that can operate an MRI machine, and I can ask the agent to change some parameters or redo a protocol. We thought that was a pretty powerful use case. We've had others that have said, you know, I really want a specific agent that's able to act like deep research does on the consumer side, but based on the context of my patient, so that it can search all the literature and pull the data from the papers that are relevant to this case. And the list goes on and on, from operations all the way to clinical, you know, sort of decision making at some level. And I think that the research community that's going to sprout around this will help guide us, I guess, to see what the most high-impact use cases are. Where is this effective? And maybe where is it not effective? But to me, the part that makes me, I guess, so excited about this is just that I don't have to think about, okay, well, then we have to figure out Health IT. Because, you know, we always have great ideas in research, and it always feels like there's such a huge chasm to get them in front of the healthcare workers that might want to test this out. And it feels like, again, this productivity tool use case, with the enterprise security and the possibility of bringing in third parties to contribute, really does feel like a new surface area for innovation.

CARLSON: Yeah, I love that. Look, let me end by putting you all on the spot. In three years, multimodal agents will do what? Matt, I'll start with you.

LUNGREN: I am convinced that it's going to save a massive amount of time before it saves many lives.

RUNDE: I'll focus on the patient care journey and diagnostic journey. I think it will transform that process for the patient and shorten it.

GUYMAN: Yeah, I think we've already seen papers recently showing that different modalities surface complementary information. And so we'll see this AI and these agents becoming an essential companion to the physician, surfacing insights that would have been overlooked otherwise.

SALIGRAMA: And similar to what you all were saying, agents will become important assistants to healthcare workers, reducing a lot of the documentation and workflow, the excess work they have to do.

CARLSON: I love that. And I guess for my part, I think really what we're going to see is a massive unleashing of creativity. We've had a lot of folks that have been innovating in this space, but they haven't had a way to actually get it into the hands of early adopters.
And I think we're going to see that really lead to an explosion of creativity across the ecosystem.

LUNGREN: So, where do we get started? The developers who are listening to this, the folks at research labs developing healthcare solutions, where do they go to get started with the Foundry, the models we've talked about, the healthcare agent orchestrator? Where do they go?

GUYMAN: So, AI.azure.com is the AI Foundry. It's a website you can go to as a developer. You can sign in with your Azure subscription, get your Azure account, your own VM, all that stuff. And you have the agent catalog and the model catalog. You can start from there. There's documentation and templates that you can then deploy to Teams or other applications.

LUNGREN: And tutorials are coming, right? We have recordings of tutorials. We'll have hackathons, some sessions, and then more to come. Yeah, we're really excited.

[MUSIC]

LUNGREN: Thank you so much, guys, for joining us.

CARLSON: Yes. Yeah. Thanks.

SALIGRAMA: Thanks for having us.

[MUSIC FADES]
    WWW.MICROSOFT.COM
    Collaborators: Healthcare Innovation to Impact
    JONATHAN CARLSON: From the beginning, healthcare stood out to us as an important opportunity for general reasoners to improve the lives and experiences of patients and providers. Indeed, in the past two years, there’s been an explosion of scientific papers looking at the application first of text reasoners and medicine, then multi-modal reasoners that can interpret medical images, and now, most recently, healthcare agents that can reason with each other. But even more impressive than the pace of research has been the surprisingly rapid diffusion of this technology into real world clinical workflows.  LUNGREN: So today, we’ll talk about how our cross-company collaboration has shortened that gap and delivered advanced AI capabilities and solutions into the hands of developers and clinicians around the world, empowering everyone in health and life sciences to achieve more. I’m Doctor Matt Lungren, chief scientific officer for Microsoft Health and Life Sciences.  CARLSON: And I’m Jonathan Carlson, vice president and managing director of Microsoft Health Futures.  LUNGREN: And together we brought some key players leading in the space of AI and health CARLSON: We’ve asked these brilliant folks to join us because each of them represents a mission critical group of cutting-edge stakeholders, scaling breakthroughs into purpose-built solutions and capabilities for health LUNGREN: We’ll hear today how generative AI capabilities can unlock reasoning across every data type in medicine: text, images, waveforms, genomics. And further, how multi-agent frameworks in healthcare can accelerate complex workflows, in some cases acting as a specialist team member, safely secured inside the Microsoft 365 tools used by hundreds of millions of healthcare enterprise users across the world. The opportunity to save time today and lives tomorrow with AI has never been larger. [MUSIC FADES]  MATTHEW LUNGREN: Jonathan. You know, it’s been really interesting kind of observing Microsoft Research over the decades. I’ve, you know, been watching you guys in my prior academic career. You are always on the front of innovation, particularly in health  JONATHAN CARLSON: I mean, it’s some of what’s in our DNA, I mean, we’ve been publishing in health and life sciences for two decades here. But when we launched Health Futures as a mission-focused lab about 7 or 8 years ago, we really started with the premise that the way to have impact was to really close the loop between, not just good ideas that get published, but good ideas that can actually be grounded in real problems that clinicians and scientists care about, that then allow us to actually go from that first proof of concept into an incubation, into getting real world feedback that allows us to close that loop. And now with, you know, the HLS organization here as a product group, we have the opportunity to work really closely with you all to not just prove what’s possible in the clinic or in the lab, but actually start scaling that into the broader community.  CAMERON RUNDE: And one thing I’ll add here is that the problems that we’re trying to tackle in health CARLSON: So, Matt, back to you. What are you guys doing in the product group? How do you guys see these models getting into the clinic? LUNGREN: You know, I think a lot of people, you know, think about AI is just, you know, maybe just even a few years old because of GPT and how that really captured the public’s consciousness. Right? 
And so, you think about the speech-to-text technology of being able to dictate something, for a clinic note or for a visit, that was typically based on Nuance technology. And so there’s a lot of product understanding of the market, how to deliver something that clinicians will use, understanding the pain points and workflows and really that Health IT space, which is sometimes the third rail, I feel like with a lot of innovation in healthcare.  But beyond that, I mean, I think now that we have this really powerful engine of Microsoft and the platform capabilities, we’re seeing, innovations on the healthcare side for data storage, data interoperability, with different types of medical data. You have new applications coming online, the ability, of course, to see generative AI now infused into the speech-to-text and, becoming Dragon Copilot, which is something that has been, you know, tremendously, received by the community.  Physicians are able to now just have a conversation with a patient. They turn to their computer and the note is ready for them. There’s no more this, we call it keyboard liberation. I don’t know if you heard that before. And that’s just been tremendous. And there’s so much more coming from that side. And then there’s other parts of the workflow that we also get engaged in — the diagnostic workflow. So medical imaging, sharing images across different hospital systems, the list goes on. And so now when you move into AI, we feel like there’s a huge opportunity to deliver capabilities into the clinical workflow via the products and solutions we already have. But, I mean, we’ll now that we’ve kind of expanded our team to involve Azure and platform, we’re really able to now focus on the developers. WILL GUYMAN: Yeah. And you’re always telling me as a doctor how frustrating it is to be spending time at the computer instead of with your patients. I think you told me, you know, 4,000 clicks a day for the typical doctor, which is tremendous. And something like Dragon Copilot can save that five minutes per patient. But it can also now take actions after the patient encounter so it can draft the after-visit summary.  It can order labs and medications for the referral. And that’s incredible. And we want to keep building on that. There’s so many other use cases across the ecosystem. And so that’s why in Azure AI Foundry, we have translated a lot of the research from Microsoft Research and made that available to developers to build and customize for their own applications.  SMITHA SALIGRAMA: Yeah. And as you were saying, in our transformation of moving from solutions to platforms and as, scaling solutions to other, multiple scenarios, as we put our models in AI Foundry, we provide these developer capabilities like bring your own data and fine LUNGREN: Well, I want to do a reality check because, you know, I think to us that are now really focused on technology, it seems like, I’ve heard this story before, right. I, I remember even in, my academic clinical days where it felt like technology was always the quick answer and it felt like technology was, there was maybe a disconnect between what my problems were or what I think needed to be done versus kind of the solutions that were kind of, created or offered to us. And I guess at some level, how Jonathan, do you think about this? Because to do things well in the science space is one thing, to do things well in science, but then also have it be something that actually drives health CARLSON: Yeah. 
I mean, as you said, I think one of the core pathologies of Big Tech is we assume every problem is a technology problem. And that’s all it will take to solve the problem. And I think, look, I was trained as a computational biologist, and that sits in the awkward middle between biology and computation. And the thing that we always have to remember, the thing that we were very acutely aware of when we set out, was that we are not the experts. We do have, you know, you as an M.D., we have everybody on the team, we have biologists on the team.  But this is a big space. And the only way we’re going to have real impact, the only way we’re even going to pick the right problems to work on is if we really partner deeply, with providers, with EHR (electronic health records) vendors, with scientists, and really understand what’s important and again, get that feedback loop.  RUNDE: Yeah, I think we really need to ground the work that we do in the science itself. You need to understand the broader ecosystem and the broader landscape, across healthwe think are important. Because, as Jonathan said, we’re not the experts in health CARLSON: When we really launched this, this mission, 7 or 8 years ago, we really came in with the premise of, if we decide to stop, we want to be sure the world cares. And the only way that’s going to be true is if we’re really deeply embedded with the people that matter–the patients, the providers and the scientists. LUNGREN: And now it really feels like this collaborative effort, you know, really can help start to extend that mission. Right. I think, you know, Will and Smitha, that we definitely feel the passion and the innovation. And we certainly benefit from those collaborations, too. But then we have these other partners and even customers, right, that we can start to tap into and have that flywheel keep spinning.  GUYMAN: Yeah. And the whole industry is an ecosystem. So, we have our own data sets at Microsoft Research that you’ve trained amazing AI models with. And those are in the catalog. But then you’ve also partnered with institutions like Providence or Page AI . And those models are in the catalog with their data. And then there are third parties like Nvidia that have their own specialized proprietary data sets, and their models are there too. So, we have this ecosystem of open source models. And maybe Smitha, you want to talk about how developers can actually customize these.  SALIGRAMA: Yeah. So we use the Azure AI Foundry ecosystem. Developers can feel at home if they’re using the AI Foundry. So they can look at our model cards that we publish as part of the models we publish, understand the use cases of these models, how to, quickly, bring up these APIs and, look at different use cases of how to apply these and even fine LUNGREN: Yeah it has been interesting to see we have these health GUYMAN: Well, the general-purpose large language models are amazing for medical general reasoning. So Microsoft Research has shown that that they can perform super well on, for example, like the United States medical licensing exam, they can exceed doctor performance if they’re just picking between different multiple-choice questions. But real medicine we know is messier. It doesn’t always start with the whole patient context provided as text in the prompt. You have to get the source data and that raw data is often non-text. The majority of it is non-text. It’s things like medical imaging, radiology, pathology, ophthalmology, dermatology. It goes on and on. 
And there’s endless signal data, lab data. And so all of this diverse data type needs to be processed through specialized models because much of that data is not available on the public internet.  And that’s why we’re taking this partner approach, first party and third party models that can interpret all this kind of data and then connect them ultimately back to these general reasoners to reason over that.  LUNGREN: So, you know, I’ve been at this company for a while and, you know, familiar with kind of how long it takes, generally to get, you know, a really good research paper, do all the studies, do all the data analysis, and then go through the process of publishing, right, which takes, as, you know, a long time and it’s, you know, very rigorous.  And one of the things that struck me, last year, I think we, we started this big collaboration and, within a quarter, you had a Nature paper coming out from Microsoft Research, and that model that the Nature paper was describing was ready to be used by anyone on the Azure AI Foundry within that same quarter. It kind of blew my mind when I thought about it, you know, even though we were all, you know, working very hard to get that done. Any thoughts on that? I mean, has this ever happened in your career? And, you know, what’s the secret sauce to that?  CARLSON: Yeah, I mean, the time scale from research to product has been massively compressed. And I’d push that even further, which is to say, the reason why it took a quarter was because we were laying the railroad tracks as we’re driving the train. We have examples right after that when we are launching on Foundry the same day we were publishing the paper.  And frankly, the review times are becoming longer than it takes to actually productize the models. I think there’s two things that are going on with that are really converging. One is that the overall ecosystem is converging on a relatively small number of patterns, and that gives us, as a tech company, a reason to go off and really make those patterns hardened in a way that allows not just us, but third parties as well, to really have a nice workflow to publish these models.  But the other is actually, I think, a change in how we work, you know, and for most of our history as an industrial research lab, we would do research and then we’d go pitch it to somebody and try and throw it over the fence. We’ve really built a much more integrated team. In fact, if you look at that Nature paper or any of the other papers, there’s folks from product teams. Many of you are on the papers along with our clinical collaborators. RUNDE: Yeah. I think one thing that’s really important to note is that there’s a ton of different ways that you can have impact, right? So I like to think about phasing. In Health Futures at least, I like to think about phasing the work that we do. So first we have research, which is really early innovation. And the impact there is getting our technology and our tools out there and really sharing the learnings that we’ve had.  So that can be through publications like you mentioned. It can be through open-sourcing our models. And then you go to incubation. So, this is, I think, one of the more new spaces that we’re getting into, which is maybe that blurred line between research and product. Right. Which is, how do we take the tools and technologies that we’ve built and get them into the hands of users, typically through our partnerships?  Right. So, we partner very deeply and collaborate very deeply across the industry. 
And incubation is really important because we get that early feedback. We get an ability to pivot if we need to. And we also get the ability to see what types of impact our technology is having in the real world. And then lastly, when you think about scale, there’s tons of different ways that you can scale. We can scale third-party through our collaborators and really empower them to go to market to commercialize the things that we’ve built together.  You can also think about scaling internally, which is why I’m so thankful that we’ve created this flywheel between research and product, and a lot of the models that we’ve built that have gone through research, have gone through incubation, have been able to scale on the Azure AI Foundry. But that’s not really our expertise. Right? The scale piece in research, that’s research and incubation. Smitha, how do you think about scaling?  SALIGRAMA: So, there are several angles to scaling the models, the state-of-the-art models we see from the research team. The first angle is, the open sourcing, to get developer trust, and very generous commercial licenses so that they can use it and for their own, use cases. The second is, we also allow them to customize these models, fine GUYMAN: And as one example, you know, University of Wisconsin Health, you know, which Matt knows well. They took one of our models, which is highly versatile. They customized it in Foundry and they optimized it to reliably identify abnormal chest X-rays, the most common imaging procedure, so they could improve their turnaround time triage quickly. And that’s just one example. But we have other partners like Sectra who are doing more of operations use cases automatically routing imaging to the radiologists, setting them up to be efficient. And then Page AI is doing, you know, biomarker identification for actually diagnostics and new drug discovery. So, there’s so many use cases that we have partners already who are building and customizing. LUNGREN: The part that’s striking to me is just that, you know, we could all sit in a room and think about all the different ways someone might use these models on the catalog. And I’m still shocked at the stuff that people use them for and how effective they are. And I think part of that is, you know, again, we talk a lot about generative AI and healthcare and all the things you can do. Again, you know, in text, you refer to that earlier and certainly off the shelf, there’s really powerful applications. But there is, you know, kind of this tip of the iceberg effect where under the water, most of the data that we use to take care of our patients is not text. Right. It’s all the different other modalities. And I think that this has been an unlock right, sort of taking these innovations, innovations from the community, putting them in this ecosystem kind of catalog, essentially. Right. And then allowing folks to kind of, you know, build and develop applications with all these different types of data. Again, I’ve been surprised at what I’m seeing.  CARLSON: This has been just one of the most profound shifts that’s happened in the last 12 months, really. I mean, two years ago we had general models in text that really shifted how we think about, I mean, natural language processing got totally upended by that. Turns out the same technology works for images as well. 
It doesn’t only allow you to automatically extract concepts from images, but allows you to align those image concepts with text concepts, which means that you can have a conversation with that image. And once you’re in that world now, you are a place where you can start stitching together these multimodal models that really change how you can interact with the data, and how you can start getting more information out of the raw primary data that is part of the patient journey. LUNGREN: Well, and we’re going to get to that because I think you just touched on something. And I want to re-emphasize stitching these things together. There’s a lot of different ways to potentially do that. Right? There’s ways that you can literally train the model end to end with adapters and all kinds of other early fusion fusions. All kinds of ways. But one of the things that the word of the I guess the year is going to be agents and an agent is a very interesting term to think about how you might abstract away some of the components or the tasks that you want the model to, to accomplish in the midst of sort of a real human to maybe model interaction. Can you talk a little bit more about, how we’re thinking about agents in this, in this platform approach?  GUYMAN: Well, this is our newest addition to the Azure AI Foundry. So there’s an agent catalog now where we have a set of pre-configured agents for health care. And then we also have a multi-agent orchestrator that can jump LUNGREN: And, and I really like that concept because, you know, as, as a, as a from the user personas, I think about myself as a user. How am I going to interact with these agents? Where does it naturally fit? And I and I sort of, you know, I’ve seen some of the demonstrations and some of the work that’s going on with Stanford in particular, showing that, you know, and literally in a Teams chat, I can have my clinician colleagues and I can have specialized health It is a completely mind-blowing thing for me. And it’s a light bulb moment for me to I wonder, what have we, what have we heard from folks that have, you know, tried out this health care agent orchestrator in this kind of deployment environment via Teams? GUYMAN: Well, someone joked, you know, are you sure you’re not using Teams because you work at Microsoft? [LAUGHS] But, then we actually were meeting with one of the, radiologists at one of our partners, and they said that that morning they had just done a Teams meeting, or they had met with other specialists to talk about a patient’s cancer case, or they were coming up with a treatment plan.  And that was the light bulb moment for us. We realized, actually, Teams is already being used by physicians as an internal communication tool, as a tool to get work done. And especially since the pandemic, a lot of the meetings moved to virtual and telemedicine. And so it’s a great distribution channel for AI, which is often been a struggle for AI to actually get in the hands of clinicians. And so now we’re allowing developers to build and then deploy very easily and extend it into their own workflows.  CARLSON: I think that’s such an important point. I mean, if you think about one of the really important concepts in computer science is an application programing interface, like some set of rules that allow two applications to talk to each other. 
One of the big pushes, really important pushes, in medicine has been standards that allow us to actually have data standards and APIs that allow these to talk to each other, and yet still we end up with these silos. There’s silos of data. There’s silos of applications. And just like when you and I work on our phone, we have to go back and forth between applications. One of the things that I think agents do is that it takes the idea that now you can use language to understand intent and effectively program an interface, and it creates a whole new abstraction layer that allows us to simplify the interaction between not just humans and the endpoint, but also for developers.  It allows us to have this abstraction layer that lets different developers focus on different types of models, and yet stitch them all together in a very, very natural, way, not just for the users, but for the ability to actually deploy those models.  SALIGRAMA: Just to add to what Jonathan was mentioning, the other cool thing about the Microsoft Teams user interface is it’s also enterprise ready. RUNDE: And one important thing that we’re thinking about, is exactly this from the very early research through incubation and then to scale, obviously. Right. And so early on in research, we are actively working with our partners and our collaborators to make sure that we have the right data privacy and consent in place. We’re doing this in incubation as well. And then obviously in scale. Yep.  LUNGREN: So, I think AI has always been thought of as a savior kind of technology. We talked a little bit about how there’s been some ups and downs in terms of the ability for technology to be effective in health care. At the same time, we’re seeing a lot of new innovations that are really making a difference. But then we kind of get, you know, we talked about agents a little bit. It feels like we’re maybe abstracting too far. Maybe it’s things are going too fast, almost. What makes this different? I mean, in your mind is this truly a logical next step or is it going to take some time?  CARLSON: I think there’s a couple things that have happened. I think first, on just a pure technology. What led to ChatGPT? And I like to think of really three major breakthroughs. The first was new mathematical concepts of attention, which really means that we now have a way that a machine can figure out which parts of the context it should actually focus on, just the way our brains do. Right? I mean, if you’re a clinician and somebody is talking to you, the majority of that conversation is not relevant for the diagnosis. But, you know how to zoom in on the parts that matter. That’s a super powerful mathematical concept. The second one is this idea of self-supervision. So, I think one of the fundamental problems of machine learning has been that you have to train on labeled training data and labels are expensive, which means data sets are small, which means the final models are very narrow and brittle. And the idea of self-supervision is that you can just get a model to automatically learn concepts, and the language is just predict the next word. And what’s important about that is that leads to models that can actually manipulate and understand really messy text and pull out what’s important about that, and then and then stitch that back together in interesting ways. And the third concept, that came out of those first two, was just the observational scale. And that’s that more is better, more data, more compute, bigger models. 
And that really leads to a reason to keep investing. And for these models to keep getting better. So that as a as a groundwork, that’s what led to ChatGPT. That’s what led to our ability now to not just have rule-based systems or simple machine learning based systems to take a messy EHR record, say, and pull out a couple concepts. But to really feed the whole thing in and say, okay, I need you to figure out which concepts are in here. And is this particular attribute there, for example. That’s now led to the next breakthrough, which is all those core ideas apply to images as well. They apply to proteins, to DNA. And so we’re starting to see models that understand images and the concepts of images, and can actually map those back to text as well.  So, you can look at a pathology image and say, not just at the cell, but it appears that there’s some certain sort of cancer in this particular, tissue there. And then you take those two things together and you layer on the fact that now you have a model, or a set of models, that can understand intent, can understand human concepts and biomedical concepts, and you can start stitching them together into specialized agents that can actually reason with each other, which at some level gives you an API as a developer to say, okay, I need to focus on a pathology model and get this really, really, sound while somebody else is focusing on a radiology model, but now allows us to stitch these all together with a user interface that we can now talk to through natural language.  RUNDE: I’d like to double click a little bit on that medical abstraction piece that you mentioned. Just the amount of data, clinical data that there is for each individual patient. Let’s think about cancer patients for a second to make this real. Right. For every cancer patient, it could take a couple of hours to structure their information. And why is that important? Because, you have to get that information in a structured way and abstract relevant information to be able to unlock precision health applications right, for each patient. So, to be able to match them to a trial, right, someone has to sit there and go through all of the clinical notes from their entire patient care journey, from the beginning to the end. And that’s not scalable. And so one thing that we’ve been doing in an active project that we’ve been working on with a handful of our partners, but Providence specifically, I’ll call out, is using AI to actually abstract and curate that information. So that gives time back to the health care provider to spend with patients, instead of spending all their time curating this information.  And this is super important because it sets the scene and the backbone for all those precision health applications. Like I mentioned, clinical trial matching, tumor boards is another really important example here. Maybe Matt, you can talk to that a little bit. LUNGREN: It’s a great example. And you know it’s so funny. We’ve talked about this use case and the you know the health And a tumor board is a critical meeting that happens at many cancer centers where specialists all get together, come with their perspective, and make a comment on what would be the best next step in treatment. But the background in preparing for that is you know, again, organizing the data. But to your point, also, what are the clinical trials that are active? There are thousands of clinical trials. There’s hundreds every day added. How can anyone keep up with that? 
And these are the kinds of use cases that start to bubble up. And you realize that a technology that understands concepts, context and can reason over vast amounts of data with a language interface-that is a powerful tool. Even before we get to some of the, you know, unlocking new insights and even precision medicine, this is that idea of saving time before lives to me. And there’s an enormous amount of undifferentiated heavy lifting that happens in health GUYMAN: And we’ve packaged these agents, the manual abstraction work that, you know, manually takes hours. Now we have an agent. It’s in Foundry along with the clinical trial matching agent, which I think at Providence you showed could double the match rate over the baseline that they were using by using the AI for multiple data sources. So, we have that and then we have this orchestration that is using this really neat technology from Microsoft Research. Semantic Kernel, Magentic There’s turn taking, there’s negotiation between the agents. So, there’s this really interesting system that’s emerging. And again, this is all possible to be used through Teams. And there’s some great extensibility as well. We’ve been talking about that and working on some cool tools.  SALIGRAMA: Yeah. Yeah. No, I think if I have to geek out a little bit on how all this agent tech orchestrations are coming up, like I’ve been in software engineering for decades, it’s kind of a next version of distributed systems where you have these services that talk to each other. It’s a more natural way because LLMs are giving these natural ways instead of a structured API ways of conversing. We have these agents which can naturally understand how to talk to each other. Right. So this is like the next evolution of our systems now. And the way we’re packaging all of this is multiple ways based on all the standards and innovation that’s happening in this space. So, first of all, we are building these agents that are very good at specific tasks, like, Will was saying like, a trial matching agent or patient timeline agents.  So, we take all of these, and then we package it in a workflow and an orchestration. We use the standard, some of these coming from research. The Semantic Kernel, the Magentic-One. And then, all of these also allow us to extend these agents with custom agents that can be plugged in. So, we are open sourcing the entire agent orchestration in AI Foundry templates, so that developers can extend their own agents, and make their own workflows out of it. So, a lot of cool innovation happening to apply this technology to specific scenarios and workflows.  LUNGREN: Well, I was going to ask you, like, so as part of that extension. So, like, you know, folks can say, hey, I have maybe a really specific part of my workflow that I want to use some agents for, maybe one of the agents that can do PubMed literature search, for example. But then there’s also agents that, come in from the outside, you know, sort of like I could, I can imagine a software company or AI company that has a built-in agent that plugs in as well.  SALIGRAMA: Yeah. Yeah, absolutely. So, you can bring your own agent. And then we have these, standard ways of communicating with agents and integrating with the orchestration language so you can bring your own agent and extend this health care agent, agent orchestrator to your own needs.  LUNGREN: I can just think of, like, in a group chat, like a bunch of different specialist agents. 
And I really would want an orchestrator to help find the right tool, to your point earlier, because I’m guessing this ecosystem is going to expand quickly. Yeah. And I may not know which tool is best for which question. I just want to ask the question. Right.  SALIGRAMA: Yeah. Yeah.  CARLSON: Well, I think to that point to I mean, you said an important point here, which is tools, and these are not necessarily just AI tools. Right? I mean, we’ve known this for a while, right? LLMS are not very good at math, but you can have it use a calculator and then it works very well. And you know you guys both brought up the universal medical abstraction a couple times.  And one of the things that I find so powerful about that is we’ve long had this vision within the precision health community that we should be able to have a learning hospital system. We should be able to actually learn from the actual real clinical experiences that are happening every day, so that we can stop practicing medicine based off averages.  There’s a lot of work that’s gone on for the last 20 years about how to actually do causal inference. That’s not an AI question. That’s a statistical question. The bottleneck, the reason why we haven’t been able to do that is because most of that information is locked up in unstructured text. And these other tools need essentially a table.  And so now you can decompose this problem, say, well, what if I can use AI not to get to the causal answer, but to just structure the information. So now I can put it into the causal inference tool. And these sorts of patterns I think again become very, not just powerful for a programmer, but they start pulling together different specialties. And I think we’ll really see an acceleration, really, of collaboration across disciplines because of this.  CARLSON: So, when I joined Microsoft Research 18 years ago, I was doing work in computational biology. And I would always have to answer the question: why is Microsoft in biomedicine? And I would always kind of joke saying, well, it is. We sell Office and Windows to every health SALIGRAMA: A lot of healthcare organizations already use Microsoft productivity tools, as you mentioned. So, they asked the developers, build these agents, and use our healthcare orchestrations, to plug in these agents and expose these in these productivity tools. They will get access to all these healthcare workers. So the healthcare agent orchestrator we have today integrates with Microsoft Teams, and it showcases an example of how you can at (@) mention these agents and talk to them like you were talking to another person in a Teams chat. And then it also provides examples of these agents and how they can use these productivity tools. One of the examples we have there is how they can summarize the assessments of this whole chat into a Word Doc, or even convert that into a PowerPoint presentation, for later on. CARLSON: One of the things that has struck me is how easy it is to do. I mean, Will, I don’t know if you’ve worked with folks that have gone from 0 to 60, like, how fast? What does that look like?  GUYMAN: Yeah, it’s funny for us, the technology to transfer all this context into a Word Document or PowerPoint presentation for a doctor to take to a meeting is relatively straightforward compared to the complicated clinical trial matching multimodal processing. 
The feedback has been tremendous in terms of, wow, that saves so much time to have this organized report that then I can show up to meeting with and the agents can come with me to that meeting because they’re literally having a Teams meeting, often with other human specialists. And the agents can be there and ask and answer questions and fact check and source all the right information on the fly. So, there’s a nice integration into these existing tools.  LUNGREN: We worked with several different centers just to kind of understand, you know, where this might be useful. And, like, as I think we talked about before, the ideas that we’ve come up with again, this is a great one because it’s complex. It’s kind of hairy. There’s a lot of things happening under the hood that don’t necessarily require a medical license to do, right, to prepare for a tumor board and to organize data. But, it’s fascinating, actually. So, you know, folks have come up with ideas of, could I have an agent that can operate an MRI machine, and I can ask the agent to change some parameters or redo a protocol. We thought that was a pretty powerful use case. We’ve had others that have just said, you know, I really want to have a specific agent that’s able to kind of act like deep research does for the consumer side, but based on the context of my patient, so that it can search all the literature and pull the data in the papers that are relevant to this case. And the list goes on and on from operations all the way to clinical, you know, sort of decision making at some level. And I think that the research community that’s going to sprout around this will help us, guide us, I guess, to see what is the most high-impact use cases. Where is this effective? And maybe where it’s not effective. But to me, the part that makes me so, I guess excited about this is just that I don’t have to think about, okay, well, then we have to figure out Health IT. Because it’s always, you know, we always have great ideas and research, and it always feels like there’s such a huge chasm to get it in front of the health care workers that might want to test this out. And it feels like, again, this productivity tool use case again with the enterprise security, the possibility for bringing in third parties to contribute really does feel like it’s a new surface area for innovation. CARLSON: Yeah, I love that. Look. Let me end by putting you all on the spot. So, in three years, multimodal agents will do what? Matt, I’ll start with you.  LUNGREN: I am convinced that it’s going to save massive amount of time before it saves many lives.  RUNDE: I’ll focus on the patient care journey and diagnostic journey. I think it will kind of transform that process for the patient itself and shorten that process.  GUYMAN: Yeah, I think we’ve seen already papers recently showing that different modalities surfaced complementary information. And so we’ll see kind of this AI and these agents becoming an essential companion to the physician, surfacing insights that would have been overlooked otherwise.  SALIGRAMA: And similar to what you guys were saying, agents will become important assistants to healthcare workers, reducing a lot of documentation and workflow, excess work they have to do.  CARLSON: I love that. And I guess for my part, I think really what we’re going to see is a massive unleash of creativity. We’ve had a lot of folks that have been innovating in this space, but they haven’t had a way to actually get it into the hands of early adopters. 
And I think we’re going to see that really lead to an explosion of creativity across the ecosystem.

LUNGREN: So, where do we get started? Where do the developers who are listening to this, the folks at research labs developing healthcare solutions, go to get started with the Foundry, the models we’ve talked about, the healthcare agent orchestrator? Where do they go?

GUYMAN: So AI.azure.com is the AI Foundry. It’s a website you can go to as a developer. You can sign in with your Azure subscription, get your Azure account, your own VM, all that stuff. And you have the agent catalog, the model catalog. You can start from there. There’s documentation and templates that you can then deploy to Teams or other applications.

LUNGREN: And tutorials are coming, right? We have recordings of tutorials. We’ll have hackathons, some sessions, and then more to come. Yeah, we’re really excited.

[MUSIC]

LUNGREN: Thank you so much, guys, for joining us.

CARLSON: Yes. Yeah. Thanks.

SALIGRAMA: Thanks for having us.

[MUSIC FADES]
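The "structure first, then analyze" pattern Carlson describes, in which a model abstracts unstructured notes into a table that a conventional causal-inference tool can consume, can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: `ask_llm` stands in for whatever chat-completion endpoint you use, and the schema and field names are invented for the example.

```python
import json
import pandas as pd

SCHEMA = ["age", "smoker", "treatment_given", "outcome_improved"]  # illustrative fields

def ask_llm(prompt: str) -> str:
    # Placeholder for a real chat-completion call; returns a canned reply
    # so the sketch runs end to end.
    return '{"age": 72, "smoker": false, "treatment_given": true, "outcome_improved": true}'

def abstract_note(note: str) -> dict:
    prompt = (
        f"Extract the fields {', '.join(SCHEMA)} as a JSON object "
        f"from this clinical note:\n{note}"
    )
    return json.loads(ask_llm(prompt))

notes = [
    "72-year-old non-smoker started on drug X; symptoms improved at follow-up.",
    "65-year-old smoker, drug X not given; no improvement reported.",
]
table = pd.DataFrame([abstract_note(n) for n in notes], columns=SCHEMA)
print(table)
# `table` is the kind of structured input an off-the-shelf causal-inference
# library (for example, DoWhy) expects; the LLM only did the abstraction step.
```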
  • That’s One Smart Hospital! Taiwan Medical Centers Deploy Life-Saving Innovations With NVIDIA System-Builder Partners

    Leading healthcare organizations across the globe are using agentic AI, robotics and digital twins of medical environments to enhance surgical precision, boost workflow efficiency, improve medical diagnoses and more.
    Physical AI and humanoid robots in hospitals have the potential to automate routine tasks, assist with patient care and address workforce shortages.
    This is especially crucial in regions where the barriers to optimal healthcare are greatest. Such challenges include hospital overcrowding, an aging population, rising healthcare costs and a shortage of medical professionals, all of which affect Taiwan as well as many other regions and countries.
    At the COMPUTEX trade show in Taipei, NVIDIA today showcased how leading Taiwan medical centers are collaborating with top system builders to integrate smart hospital technologies and other AI-powered healthcare solutions that can help reduce these issues and save millions of lives.
    Cathay General Hospital, Chang Gung Memorial Hospital (CGMH), National Taiwan University Hospital (NTUH) and Taichung Veterans General Hospital (TCVGH) are among the top centers in the region pioneering healthcare AI innovation.
    Deployed in collaboration with leading system builders such as Advantech, Onyx, Foxconn and YUAN, these solutions tap into NVIDIA’s agentic AI and robotics technologies, including the NVIDIA Holoscan and IGX platforms, NVIDIA Jetson for embedded computing and NVIDIA Omniverse for simulating virtual worlds with OpenUSD.
    CGMH Boosts AI-Powered Medical Imaging
    With an average of 8.2 million outpatient visits and 2.4 million hospitalizations a year,  CGMH estimates that a third of the Taiwanese population has sought treatment at its vast network of hospitals in Taipei and seven other cities.
    The organization is pioneering smart hospital innovation by enhancing surgical precision and workflow efficiency through advanced, AI-powered colonoscopy workflow solutions, developed in collaboration with Advantech and based on the NVIDIA Holoscan platform, which includes the Holoscan SDK and the Holoscan Sensor Bridge running on NVIDIA IGX.
    NVIDIA Holoscan is a real-time sensor processing platform for edge AI compute, while NVIDIA IGX offers enterprise-ready, industrial edge AI purpose-built for medical environments.
    Using these platforms, CGMH is accelerating AI integration in its colonoscopy diagnostics procedures. Deployed in gastrointestinal consultation rooms, the AI-powered tool collects colonoscopy streams to train a customized model built on Holoscan and provides real-time identification and classification of colonic polyps.
    Colonoscopy tools at CGMH. Image courtesy of CGMH.
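    As a rough illustration of what a real-time polyp-identification loop involves (this is not CGMH's Holoscan pipeline; the model file, class labels, and alert threshold below are hypothetical), a generic frame-by-frame classifier might look like this:

```python
import cv2
import numpy as np
import onnxruntime as ort

# Hypothetical ONNX model and recorded clip standing in for the live scope feed.
session = ort.InferenceSession(
    "polyp_classifier.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture("colonoscopy_clip.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))[None, ...]           # NCHW, batch of one
    scores = session.run(None, {input_name: x})[0][0]   # e.g. [no_polyp, hyperplastic, adenoma]
    if scores.max() > 0.9:                              # illustrative alert threshold
        print("possible polyp, class", int(scores.argmax()))
cap.release()
```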
    CGMH’s AI infrastructure — comprising NVIDIA accelerated computing, NVIDIA DGX systems, the MONAI framework, NVIDIA TensorRT-LLM open-source library, NVIDIA Dynamo inference framework, and the NVIDIA NeMo and Clara platforms — enables accelerated research and development across the organization.
    CGMH serves nearly 50 AI agent models that help the hospital analyze medical imaging every day, improving diagnostic accuracy, throughput and real-time inference at scale. For example, NVIDIA Triton-powered AI sped newborn examination record processing by 10x.
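    For context, here is what calling a Triton-served model looks like from the application side, using NVIDIA Triton's standard Python HTTP client. The model name and tensor names are assumptions for illustration; the hospital's actual deployment details are not public.

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Placeholder input: one preprocessed image tensor.
data = np.random.rand(1, 3, 512, 512).astype(np.float32)

inputs = [httpclient.InferInput("INPUT__0", data.shape, "FP32")]
inputs[0].set_data_from_numpy(data)
outputs = [httpclient.InferRequestedOutput("OUTPUT__0")]

# "newborn_exam_model" is a hypothetical model name used only for this sketch.
result = client.infer(model_name="newborn_exam_model", inputs=inputs, outputs=outputs)
print(result.as_numpy("OUTPUT__0"))
```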
    Cathay General Hospital Improves Diagnostics With AI
    Cathay General Hospital, a Taipei-based healthcare center that provides hospital management and medical services, has collaborated with National Taiwan University Hospital, medical computer manufacturer Onyx and software provider aetherAI to develop an AI-assisted colonoscopy system that highlights lesions, detects hard-to-spot polyps and issues alerts to help physicians with diagnoses.
    Polyp detection during colonoscopy. Image courtesy of aetherAI and Onyx.
    Powered by a compact, plug-and-play AI BOX device — built with the NVIDIA Jetson AGX Xavier module — the AI system is trained on over 400,000 high-quality, physician-annotated images collected from patients with diverse and severe lesions over four years.
    The system can achieve up to 95.8% accuracy and sensitivity, and studies have shown that it can improve adenoma detection rates by up to 30%. These enhancements assist physicians in reducing diagnostic errors and making more informed treatment decisions, ultimately contributing to improved patient outcomes.
    NTUH Detects Liver Tumors, Cardiovascular Risks With AI
    In the 100+ years since its founding, NTUH has nurtured countless professionals in medicine and is renowned for its trusted clinical care. The national teaching hospital is now adopting AI imaging to diagnose patients more quickly and accurately.
    NTUH’s HeaortaNet model, trained on more than 70,000 axial images from 200 patients, automates CT scan segmentation of the heart, including the aorta and other arteries, in 3D, enabling rapid analysis of risks for cardiovascular disease. The model, which achieves high segmentation accuracy for the pericardium and aorta, significantly reduced data processing time per case from an hour to about 0.4 seconds.
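    HeaortaNet itself is not publicly released, but the general recipe such a model follows, sliding-window 3D segmentation of a CT volume, can be sketched with the open-source MONAI framework. The network configuration, window size, and intensity range below are placeholders, not NTUH's settings.

```python
import torch
from monai.networks.nets import UNet
from monai.inferers import sliding_window_inference
from monai.transforms import Compose, LoadImage, EnsureChannelFirst, ScaleIntensityRange

# Placeholder 3D U-Net; HeaortaNet's actual architecture is not public.
model = UNet(spatial_dims=3, in_channels=1, out_channels=3,
             channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2)).eval()

preprocess = Compose([
    LoadImage(image_only=True),
    EnsureChannelFirst(),
    ScaleIntensityRange(a_min=-200, a_max=600, b_min=0.0, b_max=1.0, clip=True),
])

volume = preprocess("chest_ct.nii.gz").unsqueeze(0)      # (1, 1, D, H, W); hypothetical scan
with torch.no_grad():
    logits = sliding_window_inference(volume, roi_size=(96, 96, 96),
                                      sw_batch_size=2, predictor=model)
labels = logits.argmax(dim=1)  # per-voxel class, e.g. background / pericardium / aorta
```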
    In addition, NTUH collaborated with the Good Liver Foundation and system builder YUAN to develop a diagnostic-assistance system for liver cancer detection during ultrasounds. It taps into an NVIDIA Jetson Orin NX module and a deep learning model trained on more than 5,000 annotated ultrasound images to identify malignant and benign liver tumors in real time.
    YUAN and NTUH’s liver cancer detection system turns an ultrasound device into an AI-assisted diagnostic tool. Image courtesy of YUAN.
    NVIDIA DeepStream and TensorRT SDKs accelerate the system’s deep learning model, ultimately helping clinicians detect tumors earlier and more reliably. In addition, NTUH is using NVIDIA DGX to train AI models for its system that detects pancreatic cancer from CT scans.
    TCVGH Streamlines Multimodal Imaging and Clinical Documentation Workflows With AI 
    Taichung Veterans General Hospital, a medical center and a teaching hospital administered by the Veterans Affairs Council in Taipei, has partnered with Foxconn to build physical and digital robots to augment staffing, improving clinician productivity and patient experiences.
    Foxconn developed an AI system that can analyze medical images and spot signs of breast cancer earlier than traditional methods, using NVIDIA Hopper GPUs, NVIDIA DGX systems and the MONAI framework. By tapping into clinical data and multimodal AI imaging, the system creates 3D virtual breast models, quickly highlighting areas of concern in scans to help radiologists make faster, more confident decisions.
    Foxconn is also working with TCVGH to build smart hospital solutions like the AI nursing collaborative robot Nurabot and tapping into NVIDIA Omniverse to create real-time digital twins of hospital environments, including nursing stations, patient wards and corridors. These digital replicas serve as high-fidelity simulations where Jetson-powered service robots can be trained to autonomously deliver medical supplies throughout the hospital, ultimately improving care efficiency.
    AI nursing collaborative robot Nurabot. Image courtesy of Foxconn.
    In addition, TCVGH has developed and deployed its Co-Healer system, which integrates the Taiwanese native large language model TAIDE-LX-7B to streamline clinical documentation processes with agentic AI.
    Co-Healer, built on the NVIDIA Jetson Xavier NX module, processes and helps summarize medical documents — such as nursing progress notes and health education materials — and supports medical exam preparation by providing students with instant access to nursing guidelines and patient-specific protocols for clinical procedures and diagnostic tests. This helps healthcare workers alleviate burnout while giving patients a clearer understanding of their diagnoses.
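    As a rough sketch of on-device summarization with a TAIDE-family model (the Hugging Face model ID, prompt, and generation settings here are assumptions; Co-Healer's actual implementation is not described in detail), using the standard transformers pipeline:

```python
from transformers import pipeline

# Assumed Hugging Face checkpoint for a TAIDE chat model; substitute whichever
# checkpoint you actually have access to.
generator = pipeline("text-generation", model="taide/TAIDE-LX-7B-Chat", device_map="auto")

note = "Patient admitted with community-acquired pneumonia; day 3 of IV antibiotics..."
prompt = f"Summarize the following nursing progress note in three bullet points:\n{note}\n"

result = generator(prompt, max_new_tokens=200, do_sample=False)[0]["generated_text"]
print(result)
```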
    Learn more about the latest AI advancements in healthcare at NVIDIA GTC Taipei, running May 21-22 at COMPUTEX.
  • How AM Elevates Healthcare: Insights from the Materialise 3D Printing in Hospitals Forum 2025

    The cobbled streets and centuries-old university halls of Leuven recently served as a picturesque backdrop for the Materialise 3D Printing in Hospitals Forum 2025. Belgium’s Flemish Brabant capital hosted the annual meeting, which has become a key gathering for the medical 3D printing community since its launch in 2017.
    This year, 140 international healthcare professionals convened for two days of talks, workshops, and lively discussion on how Materialise’s software enhances patient care. The Forum’s opening day, hosted at Leuven’s historic Irish College, featured 16 presentations by 18 healthcare clinicians and medical 3D printing experts. 
    While often described as the future of medicine, personalized healthcare has already become routine in many clinical settings. Speakers emphasized that 3D printing is no longer merely a “cool” innovation, but an essential tool that improves patient outcomes. “Personalized treatment is not just a vision for the future,” said Koen Peters, Executive Vice President Medical at Materialise. “It’s a reality we’re building together every day.”
    During the forum, practitioners and clinical engineers demonstrated the critical role of Materialise’s software in medical workflows. Presentations highlighted value across a wide range of procedures, from brain tumour removal and organ transplantation to the separation of conjoined twins and maxillofacial implant surgeries. Several use cases demonstrated how 3D technology can reduce surgery times by up to a factor of four, enhance patient recovery, and cut hospital costs by almost £6,000 per case.
    140 visitors attended the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise.
    Digital simulation and 3D printing slash operating times 
    Headquartered a few miles outside Leuven’s medieval center, Materialise is a global leader in medical 3D printing and digital planning. Its Mimics software suite automatically converts CT and MRI scans into detailed 3D models. Clinicians use these tools to prepare for procedures, analyse anatomy, and create patient-specific models that enhance surgical planning.
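    Mimics' own scripting interface is not shown here, but the underlying idea, segmenting a CT volume and turning the result into a printable surface mesh, can be illustrated with common open-source tools. The Hounsfield threshold and file names are placeholders.

```python
import nibabel as nib
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl

ct = nib.load("patient_ct.nii.gz").get_fdata()           # hypothetical scan
bone = (ct > 300).astype(np.float32)                      # crude Hounsfield threshold for bone

# Extract a triangulated surface from the binary mask.
verts, faces, _, _ = measure.marching_cubes(bone, level=0.5)

surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, tri in enumerate(faces):
    surface.vectors[i] = verts[tri]
surface.save("skull_model.stl")                           # ready for a slicer / 3D printer
```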
    So far, Materialise software has supported more than 500,000 patients and analysed over 6 million medical scans. One case that generated notable interest among the Forum’s attendees was that of Lisa Ferrie and Jiten Parmar from Leeds General Infirmary. The pair worked alongside Asim Sheikh, a Consultant Skullbase and Neurovascular Neurosurgeon, to conduct the UK’s first “coach door osteotomy” on Ruvimbo Kaviya, a 40-year-old nurse from Leeds. 
    This novel keyhole surgery successfully removed a brain tumor from Kaviya’s cavernous sinus, a hard-to-reach area behind the eyes. Most surgeries of this kind require large incisions and the removal of substantial skull sections, resulting in extended recovery time and the risk of postoperative complications. Such an approach would have presented serious risks for removing Kaviya’s tumor, which “was in a complex area surrounded by a lot of nerves,” explained Parmar, a Consultant in Maxillofacial Surgery.   
    Instead, the Leeds-based team used a minimally invasive technique that required only a 1.5 cm incision near the side of Ruvimbo’s eyelid. A small section of skull bone was then shifted sideways and backward, much like a coach door sliding open, to create an access point for tumor removal. Following the procedure, Ruvimbo recovered in a matter of days and was left with only a 6 mm scar at the incision point.
    Materialise software played a vital role in facilitating this novel procedure. Ferrie is a Biomedical Engineer and 3D Planning Service Lead at Leeds Teaching Hospitals NHS Trust. She used Mimics to convert medical scans into digital 3D models of Ruvimbo’s skull. This allowed her team to conduct “virtual surgical planning” and practice the procedure in three dimensions, “to see if it’s going to work as we expect.”
    Ferrie also fabricated life-sized, PolyJet 3D printed anatomical models of Ruvimbo’s skull for more hands-on surgical preparation. Sheikh and Parmar used these models in the hospital’s cadaver lab to rehearse the procedure until they were confident of a successful outcome. This 3D printing-enabled approach has since been repeated for additional cases, unlocking a new standard of care for patients with previously inoperable brain tumors.
    The impact of 3D planning is striking. Average operating times fell from 8-12 hours to just 2-3 hours, and average patient discharge times dropped from 7-10 days to 2-3 days. These efficiencies translated into cost savings of £1,780 to £5,758 per case, while additional surgical capacity generated an average of £11,226 in income per operating list.
    Jiten Parmar (right) and Lisa Ferrie (left) presenting at the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise.
    Dr. Davide Curione also discussed the value of virtual planning and 3D printing for surgical procedures. Based at Bambino Gesù Pediatric Hospital in Rome, the radiologist’s team conducts 3D modeling, visualization, simulation, and 3D printing. 
    One case involved thoraco-omphalopagus twins joined at the chest and abdomen. Curione’s team 3D printed a multi-color anatomical model of the twins’ anatomy, which he called “the first of its kind for complexity in Italy.” Fabricated in transparent resin, the model offered a detailed view of the twins’ internal anatomy, including the rib cage, lungs, and cardiovascular system.
    Attention then turned to the liver. The team built a digital reconstruction to simulate the optimal resection planes for the general separation and the hepatic splitting procedure. This was followed by a second multi-colour 3D printed model highlighting the organ’s vascularisation. These resources improved surgical planning, cutting operating time by 30%, and enabled a successful separation, with no major complications reported two years post-operation.
    Dr. Davide Curione’s workflow for creating a 3D printed model of thoraco-omphalopagus twins using Mimics. Image via Frontiers in Physiology.
    VR-enabled surgery enhances organ transplants  
    Materialise’s Mimics software can also be used in extended reality (XR), allowing clinicians to interact more intuitively with 3D anatomical models and medical images. By using off-the-shelf virtual reality (VR) and augmented reality (AR) headsets, healthcare professionals can more closely examine complex structures in an immersive environment.
    Dr. David Sibřina is a Principal Researcher and Developer for the VRLab team at Prague’s Institute for Clinical and Experimental Medicine (IKEM). He leads efforts to accelerate the clinical adoption of VR and AR in organ transplantation, surgical planning, and surgical guidance.
    The former Forbes 30 Under 30 honouree explained that since 2016, IKEM’s 3D printing lab has focused on producing anatomical models to support liver and kidney donor programmes. His lab also fabricates 3D printed anatomical models of ventricles and aneurysms for clinical use. 
    However, Sibřina’s team recently became overwhelmed by high demand for physical models, with surgeons requesting additional 3D model processing options. This led Sibřina to create the IKEM VRLab, offering XR capabilities to help surgeons plan and conduct complex transplantation surgeries and resection procedures.     
    When turning to XR, Sibřina’s lab opted against adopting a ready-made software solution, instead developing its own from scratch. “The problem with some of the commercial solutions is capability and integration,” he explained. “The devices are incredibly difficult and expensive to integrate within medical systems, particularly in public hospitals.” He also pointed to user interface shortcomings and the lack of alignment with established medical protocols. 
    According to Sibřina, IKEM VRLab’s offering is a versatile and scalable VR system that is simple to use and customizable to different surgical disciplines. He described it as “Zoom for 3D planning,” enabling live virtual collaboration between medical professionals. It leverages joint CT and MRI acquisition models, developed with IKEM’s medical physicists and radiologists. Data from patient scans is converted into interactive digital reconstructions that can be leveraged for analysis and surgical planning. 
    IKEM VRLab also offers a virtual “Fitting Room,” which allows surgeons to assess whether a donor’s organ size matches the recipient’s body. A digital model is created for every deceased donor and live recipient’s body, enabling surgeons to perform the size allocation assessments. 
    Sibřina explained that this capability significantly reduces the number of recipients who would otherwise fail to be matched with a suitable donor. For example, 262 deceased liver donors have been processed for Fitting Room size allocations by IKEM VRLab. In 27 instances, the VR Fitting Room prevented potential recipients from being skipped in the waiting list based on standard biometrics, CT axis measurements, and BMI ratios.                         
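    A much-simplified version of the size-allocation idea, comparing the volume of a donor organ segmentation against the space available in the recipient, might look like the following. The file names, units, and acceptance band are illustrative only, not IKEM's clinical criteria.

```python
import nibabel as nib
import numpy as np

def mask_volume_ml(path: str) -> float:
    """Volume of a binary segmentation mask in millilitres."""
    img = nib.load(path)
    voxel_mm3 = float(np.prod(img.header.get_zooms()[:3]))
    return img.get_fdata().astype(bool).sum() * voxel_mm3 / 1000.0

donor = mask_volume_ml("donor_liver_mask.nii.gz")              # hypothetical inputs
recipient = mask_volume_ml("recipient_hepatic_fossa_mask.nii.gz")

ratio = donor / recipient
print(f"donor/recipient volume ratio: {ratio:.2f}")
if 0.8 <= ratio <= 1.2:                                        # illustrative band only
    print("size plausibly compatible; proceed to interactive 3D fitting")
```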
    Overall, 941 patient-specific visualizations have been performed using Sibřina’s technology: 285 were for liver recipients, 311 for liver donors, and 299 for liver resection. Living liver donors account for 59 cases, and split/reduced donors for 21.
    A forum attendee using Materialise’s Mimics software in augmented reality. Photo via Materialise.
    Personalized healthcare: 3D printing implants and surgical guides 
    Beyond surgical planning and 3D visualisation, Materialise Mimics software supports the design and production of patient-specific implants and surgical guides. The company conducts healthcare contract manufacturing at its Leuven HQ and medical 3D printing facility in Plymouth, Michigan. 
    Hospitals can design patient-specific medical devices in-house or collaborate with Materialise’s clinical engineers to develop custom components. Materialise then 3D prints these devices and ships them for clinical use. The Belgian company, headed by CEO Brigitte de Vet-Veithen, produces around 280,000 custom medical instruments each year, with 160,000 destined for the US market. These include personalised titanium cranio-maxillofacial (CMF) implants for facial reconstruction and colour-coded surgical guides.
    Poole Hospital’s 3D specialists, Sian Campbell and Poppy Taylor-Crawford, shared how their team has adopted Materialise software to support complex CMF surgeries. Since acquiring the platform in 2022, they have developed digital workflows for planning and 3D printing patient-specific implants and surgical guides in 14 cases, particularly for facial reconstruction. 
    Campbell and Taylor-Crawford begin their workflow by importing patient CT and MRI data into Materialise’s Mimics Enlight CMF software. Automated tools handle initial segmentation, tumour resection planning, and the creation of cutting planes. For more complex cases involving fibula or scapula grafts, the team adapts these workflows to ensure precise alignment and fit of the bone graft within the defect.
    Next, the surgical plan and anatomical data are transferred to Materialise 3-matic, where the team designs patient-specific resection guides, reconstruction plates, and implants. These designs are refined through close collaboration with surgeons, incorporating feedback to optimise geometry and fit. Virtual fit checks verify guide accuracy, while further analysis ensures compatibility with surgical instruments and operating constraints. Once validated, the guides and implants are 3D printed for surgery.
    According to Campbell and Taylor-Crawford, these custom devices enable more accurate resections and implant placements. This improves surgical alignment and reduces theatre time by minimising intraoperative adjustments.
    An example of the cranio-maxillofacial implants and surgical guides 3D printed by Materialise. Photo by 3D Printing Industry
    Custom 3D printed implants are also fabricated at the Rizzoli Orthopaedic Institute in Bologna, Italy. Originally established as a motion analysis lab, the institute has expanded its expertise into surgical planning, biomechanical analysis, and now, personalized 3D printed implant design.
    Dr. Alberto Leardini, Director of the Movement Analysis Laboratory, described his team’s patient-specific implant workflow. They combine CT and MRI scans to identify bone defects and tumour locations. Clinical engineers then use this data to build digital models and plan resections. They also design cutting guides and custom implants tailored to each patient’s anatomy.
    These designs are refined in collaboration with surgeons before being outsourced to manufacturing partners for production. Importantly, this workflow internalizes design and planning phases. By hosting engineering and clinical teams together on-site, they aim to streamline decision-making and reduce lead times. Once the digital design is finalised, only the additive manufacturing step is outsourced, ensuring “zero distance” collaboration between teams. 
    Dr. Leardini emphasised that this approach improves clinical outcomes and promises economic benefits. While custom implants require more imaging and upfront planning, they reduce time in the operating theatre, shorten hospital stays, and minimise patient transfers. 
    After a full day of presentations inside the Irish College’s eighteenth-century chapel, the consensus was clear. 3D technology is not a niche capability reserved for high-end procedures, but a valuable tool enhancing everyday care for thousands of patients globally. From faster surgeries to cost savings and personalized treatments, hospitals are increasingly embedding 3D technology into routine care. Materialise’s software sits at the heart of this shift, enabling clinicians to deliver safer, smarter, and more efficient healthcare. 
    #how #elevates #healthcare #insights #materialise
    How AM Elevates Healthcare: Insights from the Materialise 3D Printing in Hospitals Forum 2025
    The cobbled streets and centuries-old university halls of Leuven recently served as a picturesque backdrop for the Materialise 3D Printing in Hospitals Forum 2025. Belgium’s Flemish Brabant capital hosted the annual meeting, which has become a key gathering for the medical 3D printing community since its launch in 2017. This year, 140 international healthcare professionals convened for two days of talks, workshops, and lively discussion on how Materialise’s software enhances patient care. The Forum’s opening day, hosted at Leuven’s historic Irish College, featured 16 presentations by 18 healthcare clinicians and medical 3D printing experts.  While often described as the future of medicine, personalized healthcare has already become routine in many clinical settings. Speakers emphasized that 3D printing is no longer merely a “cool” innovation, but an essential tool that improves patient outcomes. “Personalized treatment is not just a vision for the future,” said Koen Peters, Executive Vice President Medical at Materialise. “It’s a reality we’re building together every day.” During the forum, practitioners and clinical engineers demonstrated the critical role of Materialise’s software in medical workflows. Presentations highlighted value across a wide range of procedures, from brain tumour removal and organ transplantation to the separation of conjoined twins and maxillofacial implant surgeries. Several use cases demonstrated how 3D technology can reduce surgery times by up to four times, enhance patient recovery, and cut hospital costs by almost £6,000 per case.      140 visitors attended the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise. Digital simulation and 3D printing slash operating times  Headquartered a few miles outside Leuven’s medieval center, Materialise is a global leader in medical 3D printing and digital planning. Its Mimics software suite automatically converts CT and MRI scans into detailed 3D models. Clinicians use these tools to prepare for procedures, analyse anatomy, and create patient-specific models that enhance surgical planning. So far, Materialise software has supported more than 500,000 patients and analysed over 6 million medical scans. One case that generated notable interest among the Forum’s attendees was that of Lisa Ferrie and Jiten Parmar from Leeds General Infirmary. The pair worked alongside Asim Sheikh, a Consultant Skullbase and Neurovascular Neurosurgeon, to conduct the UK’s first “coach door osteotomy” on Ruvimbo Kaviya, a 40-year-old nurse from Leeds.  This novel keyhole surgery successfully removed a brain tumor from Kaviya’s cavernous sinus, a hard-to-reach area behind the eyes. Most surgeries of this kind require large incisions and the removal of substantial skull sections, resulting in extended recovery time and the risk of postoperative complications. Such an approach would have presented serious risks for removing Kaviya’s tumor, which “was in a complex area surrounded by a lot of nerves,” explained Parmar, a Consultant in Maxillofacial Surgery.    Instead, the Leeds-based team uses a minimally invasive technique that requires only a 1.5 cm incision near the side of Ravimbo’s eyelid. A small section of skull bone was then shifted sideways and backward, much like a coach door sliding open, to create an access point for tumor removal. Following the procedure, Ravimbo recovered in a matter of days and was left with only a 6 mm scar at the incision point.  
Materialise software played a vital role in facilitating this novel procedure. Ferrie is a Biomedical Engineer and 3D Planning Service Lead at Leeds Teaching Hospitals NHS Trust. She used mimics to convert medical scans into digital 3D models of Ravimbo’s skull. This allowed her team to conduct “virtual surgical planning” and practice the procedure in three dimensions, “to see if it’s going to work as we expect.”  Ferrie also fabricated life-sized, polyjet 3D printed anatomical models of Ravimbo’s skull for more hands-on surgical preparation. Sheikh and Parmar used these models in the hospital’s cadaver lab to rehearse the procedure until they were confident of a successful outcome. This 3D printing-enabled approach has since been repeated for additional cases, unlocking a new standard of care for patients with previously inoperable brain tumors.  The impact of 3D planning is striking. Average operating times fell from 8-12 hours to just 2-3 hours, and average patient discharge times dropped from 7-10 days to 2-3 days. These efficiencies translated into cost savings of £1,780 to £5,758 per case, while additional surgical capacity generated an average of £11,226 in income per operating list. Jiten Parmarand Lisa Ferriepresenting at the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise. Dr. Davide Curione also discussed the value of virtual planning and 3D printing for surgical procedures. Based at Bambino Gesù Pediatric Hospital in Rome, the radiologist’s team conducts 3D modeling, visualization, simulation, and 3D printing.  One case involved thoraco-omphalopagus twins joined at the chest and abdomen. Curione’s team 3D printed a multi-color anatomical model of the twins’ anatomy, which he called “the first of its kind for complexity in Italy.” Fabricated in transparent resin, the model offered a detailed view of the twins’ internal anatomy, including the rib cage, lungs, and cardiovascular system. Attention then turned to the liver. The team built a digital reconstruction to simulate the optimal resection planes for the general separation and the hepatic splitting procedure. This was followed by a second multi-colour 3D printed model highlighting the organ’s vascularisation. These resources improved surgical planning, cutting operating time by 30%, and enabled a successful separation, with no major complications reported two years post-operation. Dr. Davide Curione’s workflow for creating a 3D printed model of thoraco-omphalopagus twins using Mimics. Image via Frontiers in Physiology. VR-enabled surgery enhances organ transplants   Materialise’s Mimics software can also be used in extended reality, allowing clinicians to interact more intuitively with 3D anatomical models and medical images. By using off-the-shelf virtual realityand augmented realityheadsets, healthcare professionals can more closely examine complex structures in an immersive environment. Dr. David Sibřina is a Principal Researcher and Developer for the VRLab team at Prague’s Institute for Clinical and Experimental Medicine. He leads efforts to accelerate the clinical adoption of VR and AR in organ transplantation, surgical planning, and surgical guidance.  The former Forbes 30 Under 30 honouree explained that since 2016, IKEM’s 3D printing lab has focused on producing anatomical models to support liver and kidney donor programmes. His lab also fabricates 3D printed anatomical models of ventricles and aneurysms for clinical use.  
However, Sibřina’s team recently became overwhelmed by high demand for physical models, with surgeons requesting additional 3D model processing options. This led Sibřina to create the IKEM VRLab, offering XR capabilities to help surgeons plan and conduct complex transplantation surgeries and resection procedures.      When turning to XR, Sibřina’s lab opted against adopting a ready-made software solution, instead developing its own from scratch. “The problem with some of the commercial solutions is capability and integration,” he explained. “The devices are incredibly difficult and expensive to integrate within medical systems, particularly in public hospitals.” He also pointed to user interface shortcomings and the lack of alignment with established medical protocols.  According to Sibřina, IKEM VRLab’s offering is a versatile and scalable VR system that is simple to use and customizable to different surgical disciplines. He described it as “Zoom for 3D planning,” enabling live virtual collaboration between medical professionals. It leverages joint CT and MRI acquisition models, developed with IKEM’s medical physicists and radiologists. Data from patient scans is converted into interactive digital reconstructions that can be leveraged for analysis and surgical planning.  IKEM VRLab also offers a virtual “Fitting Room,” which allows surgeons to assess whether a donor’s organ size matches the recipient’s body. A digital model is created for every deceased donor and live recipient’s body, enabling surgeons to perform the size allocation assessments.  Sibřina explained that this capability significantly reduces the number of recipients who would otherwise fail to be matched with a suitable donor. For example, 262 deceased liver donors have been processed for Fitting Room size allocations by IKEM VRLab. In 27 instances, the VR Fitting Room prevented potential recipients from being skipped in the waiting list based on standard biometrics, CT axis measurements, and BMI ratios.                          Overall, 941 patient-specific visualizations have been performed using Sibřina’s technology. 285were for liver recipients, 311for liver donors, and 299for liver resection. Living liver donors account for 59cases, and split/reduced donors for 21.           A forum attendee using Materialise’s Mimics software in augmented reality. Photo via Materialise. Personalized healthcare: 3D printing implants and surgical guides  Beyond surgical planning and 3D visualisation, Materialise Mimics software supports the design and production of patient-specific implants and surgical guides. The company conducts healthcare contract manufacturing at its Leuven HQ and medical 3D printing facility in Plymouth, Michigan.  Hospitals can design patient-specific medical devices in-house or collaborate with Materialise’s clinical engineers to develop custom components. Materialise then 3D prints these devices and ships them for clinical use. The Belgian company, headed by CEO Brigitte de Vet-Veithen, produces around 280,000 custom medical instruments each year, with 160,000 destined for the US market. These include personalised titanium cranio-maxillofacialimplants for facial reconstruction and colour-coded surgical guides. Poole Hospital’s 3D specialists, Sian Campbell and Poppy Taylor-Crawford, shared how their team has adopted Materialise software to support complex CMF surgeries. 
Since acquiring the platform in 2022, they have developed digital workflows for planning and 3D printing patient-specific implants and surgical guides in 14 cases, particularly for facial reconstruction.  Campbell and Taylor-Crawford begin their workflow by importing patient CT and MRI data into Materialise’s Mimics Enlight CMF software. Automated tools handle initial segmentation, tumour resection planning, and the creation of cutting planes. For more complex cases involving fibula or scapula grafts, the team adapts these workflows to ensure precise alignment and fit of the bone graft within the defect. Next, the surgical plan and anatomical data are transferred to Materialise 3-matic, where the team designs patient-specific resection guides, reconstruction plates, and implants. These designs are refined through close collaboration with surgeons, incorporating feedback to optimise geometry and fit. Virtual fit checks verify guide accuracy, while further analysis ensures compatibility with surgical instruments and operating constraints. Once validated, the guides and implants are 3D printed for surgery. According to Campbell and Taylor-Crawford, these custom devices enable more accurate resections and implant placements. This improves surgical alignment and reduces theatre time by minimising intraoperative adjustments. An example of the cranio-maxillofacial implants and surgical guides 3D printed by Materialise. Photo by 3D Printing Industry Custom 3D printed implants are also fabricated at the Rizzoli Orthopaedic Institute in Bologna, Italy. Originally established as a motion analysis lab, the institute has expanded its expertise into surgical planning, biomechanical analysis, and now, personalized 3D printed implant design. Dr. Alberto Leardini, Director of the Movement Analysis Laboratory, described his team’s patient-specific implant workflow. They combine CT and MRI scans to identify bone defects and tumour locations. Clinical engineers then use this data to build digital models and plan resections. They also design cutting guides and custom implants tailored to each patient’s anatomy. These designs are refined in collaboration with surgeons before being outsourced to manufacturing partners for production. Importantly, this workflow internalizes design and planning phases. By hosting engineering and clinical teams together on-site, they aim to streamline decision-making and reduce lead times. Once the digital design is finalised, only the additive manufacturing step is outsourced, ensuring “zero distance” collaboration between teams.  Dr. Leardini emphasised that this approach improves clinical outcomes and promises economic benefits. While custom implants require more imaging and upfront planning, they reduce time in the operating theatre, shorten hospital stays, and minimise patient transfers.  After a full day of presentations inside the Irish College’s eighteenth-century chapel, the consensus was clear. 3D technology is not a niche capability reserved for high-end procedures, but a valuable tool enhancing everyday care for thousands of patients globally. From faster surgeries to cost savings and personalized treatments, hospitals are increasingly embedding 3D technology into routine care. Materialise’s software sits at the heart of this shift, enabling clinicians to deliver safer, smarter, and more efficient healthcare.  Take the 3DPI Reader Survey – shape the future of AM reporting in under 5 minutes. 
Read all the 3D printing news from RAPID + TCT 2025 Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry Youtube channel to access more exclusive content.Featured image shows 3D printed anatomical models at Materialise HQ in Leuven. Photo by 3D Printing Industry. #how #elevates #healthcare #insights #materialise
    3DPRINTINGINDUSTRY.COM
    How AM Elevates Healthcare: Insights from the Materialise 3D Printing in Hospitals Forum 2025
    The cobbled streets and centuries-old university halls of Leuven recently served as a picturesque backdrop for the Materialise 3D Printing in Hospitals Forum 2025. Belgium’s Flemish Brabant capital hosted the annual meeting, which has become a key gathering for the medical 3D printing community since its launch in 2017. This year, 140 international healthcare professionals convened for two days of talks, workshops, and lively discussion on how Materialise’s software enhances patient care. The Forum’s opening day, hosted at Leuven’s historic Irish College, featured 16 presentations by 18 healthcare clinicians and medical 3D printing experts.  While often described as the future of medicine, personalized healthcare has already become routine in many clinical settings. Speakers emphasized that 3D printing is no longer merely a “cool” innovation, but an essential tool that improves patient outcomes. “Personalized treatment is not just a vision for the future,” said Koen Peters, Executive Vice President Medical at Materialise. “It’s a reality we’re building together every day.” During the forum, practitioners and clinical engineers demonstrated the critical role of Materialise’s software in medical workflows. Presentations highlighted value across a wide range of procedures, from brain tumour removal and organ transplantation to the separation of conjoined twins and maxillofacial implant surgeries. Several use cases demonstrated how 3D technology can reduce surgery times by up to four times, enhance patient recovery, and cut hospital costs by almost £6,000 per case.      140 visitors attended the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise. Digital simulation and 3D printing slash operating times  Headquartered a few miles outside Leuven’s medieval center, Materialise is a global leader in medical 3D printing and digital planning. Its Mimics software suite automatically converts CT and MRI scans into detailed 3D models. Clinicians use these tools to prepare for procedures, analyse anatomy, and create patient-specific models that enhance surgical planning. So far, Materialise software has supported more than 500,000 patients and analysed over 6 million medical scans. One case that generated notable interest among the Forum’s attendees was that of Lisa Ferrie and Jiten Parmar from Leeds General Infirmary. The pair worked alongside Asim Sheikh, a Consultant Skullbase and Neurovascular Neurosurgeon, to conduct the UK’s first “coach door osteotomy” on Ruvimbo Kaviya, a 40-year-old nurse from Leeds.  This novel keyhole surgery successfully removed a brain tumor from Kaviya’s cavernous sinus, a hard-to-reach area behind the eyes. Most surgeries of this kind require large incisions and the removal of substantial skull sections, resulting in extended recovery time and the risk of postoperative complications. Such an approach would have presented serious risks for removing Kaviya’s tumor, which “was in a complex area surrounded by a lot of nerves,” explained Parmar, a Consultant in Maxillofacial Surgery.    Instead, the Leeds-based team uses a minimally invasive technique that requires only a 1.5 cm incision near the side of Ravimbo’s eyelid. A small section of skull bone was then shifted sideways and backward, much like a coach door sliding open, to create an access point for tumor removal. Following the procedure, Ravimbo recovered in a matter of days and was left with only a 6 mm scar at the incision point.  
Materialise software played a vital role in facilitating this novel procedure. Ferrie, a Biomedical Engineer and 3D Planning Service Lead at Leeds Teaching Hospitals NHS Trust, used Mimics to convert medical scans into digital 3D models of Kaviya’s skull. This allowed her team to conduct “virtual surgical planning” and practice the procedure in three dimensions, “to see if it’s going to work as we expect.” Ferrie also fabricated life-sized, PolyJet 3D printed anatomical models of Kaviya’s skull for more hands-on surgical preparation. Sheikh and Parmar used these models in the hospital’s cadaver lab to rehearse the procedure until they were confident of a successful outcome. This 3D printing-enabled approach has since been repeated for additional cases, unlocking a new standard of care for patients with previously inoperable brain tumours.

The impact of 3D planning is striking. Average operating times fell from 8-12 hours to just 2-3 hours, and average patient discharge times dropped from 7-10 days to 2-3 days. These efficiencies translated into cost savings of £1,780 to £5,758 per case, while the additional surgical capacity generated an average of £11,226 in income per operating list.

Jiten Parmar (right) and Lisa Ferrie (left) presenting at the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise.

Dr. Davide Curione also discussed the value of virtual planning and 3D printing for surgical procedures. Based at Bambino Gesù Pediatric Hospital in Rome, the radiologist’s team conducts 3D modeling, visualization, simulation, and 3D printing. One case involved thoraco-omphalopagus twins joined at the chest and abdomen. Curione’s team 3D printed a multi-colour anatomical model of the twins’ anatomy, which he called “the first of its kind for complexity in Italy.” Fabricated in transparent resin, the model offered a detailed view of the twins’ internal anatomy, including the rib cage, lungs, and cardiovascular system.

Attention then turned to the liver. The team built a digital reconstruction to simulate the optimal resection planes for the general separation and the hepatic splitting procedure. This was followed by a second multi-colour 3D printed model highlighting the organ’s vascularisation. These resources improved surgical planning, cutting operating time by 30%, and enabled a successful separation, with no major complications reported two years post-operation.

Dr. Davide Curione’s workflow for creating a 3D printed model of thoraco-omphalopagus twins using Mimics. Image via Frontiers in Physiology.

VR-enabled surgery enhances organ transplants

Materialise’s Mimics software can also be used in extended reality (XR), allowing clinicians to interact more intuitively with 3D anatomical models and medical images. Using off-the-shelf virtual reality (VR) and augmented reality (AR) headsets, healthcare professionals can examine complex structures more closely in an immersive environment. Dr. David Sibřina is a Principal Researcher and Developer for the VRLab team at Prague’s Institute for Clinical and Experimental Medicine (IKEM). He leads efforts to accelerate the clinical adoption of VR and AR in organ transplantation, surgical planning, and surgical guidance. The former Forbes 30 Under 30 honouree explained that since 2016, IKEM’s 3D printing lab has focused on producing anatomical models to support liver and kidney donor programmes. His lab also fabricates 3D printed anatomical models of ventricles and aneurysms for clinical use.
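Whether in Leeds, Rome, or Prague, these workflows share the same first step: segmenting a volumetric CT or MRI scan and reconstructing the anatomy of interest as a surface mesh that can be inspected, planned against, or 3D printed. The snippet below is a minimal, generic sketch of that step using open-source Python tools (NumPy and scikit-image); it is not Materialise’s Mimics pipeline, and the Hounsfield threshold, voxel spacing, and synthetic test volume are placeholder assumptions for illustration only.

```python
# Minimal scan-to-mesh sketch (illustrative only; not the Mimics pipeline).
# Assumes a CT volume is already loaded as a NumPy array in Hounsfield units.
import numpy as np
from skimage import measure


def ct_volume_to_mesh(volume_hu, voxel_spacing_mm=(1.0, 1.0, 1.0), bone_threshold_hu=300.0):
    """Segment bone with a simple HU threshold and extract a triangle mesh.

    volume_hu         : 3D array of CT intensities in Hounsfield units.
    voxel_spacing_mm  : physical voxel size along each axis (z, y, x).
    bone_threshold_hu : placeholder threshold; clinical workflows rely on far
                        more sophisticated, largely automated segmentation.
    """
    # Marching cubes converts the thresholded iso-surface into vertices and faces.
    verts, faces, normals, _ = measure.marching_cubes(
        volume_hu, level=bone_threshold_hu, spacing=voxel_spacing_mm
    )
    return verts, faces, normals


if __name__ == "__main__":
    # Synthetic stand-in for a real scan: a bright spherical "bone" region in a
    # soft-tissue background, just so the example runs end to end.
    zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
    sphere = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
    volume = np.where(sphere, 800.0, 40.0)  # ~800 HU "bone" vs ~40 HU "tissue"

    verts, faces, _ = ct_volume_to_mesh(volume, voxel_spacing_mm=(0.8, 0.5, 0.5))
    print(f"Extracted mesh with {len(verts)} vertices and {len(faces)} triangles")
```

In a clinical service such as those described above, the resulting mesh would then be cleaned, cut, and exported (for example as STL) for virtual planning or printing; those steps are omitted here.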
Demand for IKEM’s physical models recently became overwhelming, with surgeons requesting additional 3D model processing options. This led Sibřina to create the IKEM VRLab, which offers XR capabilities to help surgeons plan and conduct complex transplantation and resection procedures.

When turning to XR, Sibřina’s lab opted against adopting a ready-made software solution, instead developing its own from scratch. “The problem with some of the commercial solutions is capability and integration,” he explained. “The devices are incredibly difficult and expensive to integrate within medical systems, particularly in public hospitals.” He also pointed to user interface shortcomings and a lack of alignment with established medical protocols.

According to Sibřina, IKEM VRLab’s offering is a versatile and scalable VR system that is simple to use and customizable to different surgical disciplines. He described it as “Zoom for 3D planning,” enabling live virtual collaboration between medical professionals. It leverages joint CT and MRI acquisition models, developed with IKEM’s medical physicists and radiologists, and converts patient scan data into interactive digital reconstructions for analysis and surgical planning.

IKEM VRLab also offers a virtual “Fitting Room,” which allows surgeons to assess whether a donor organ’s size matches the recipient’s body. A digital model is created for every deceased donor and every living recipient, enabling surgeons to perform size allocation assessments. Sibřina explained that this capability significantly reduces the number of recipients who would otherwise fail to be matched with a suitable donor. To date, 262 deceased liver donors have been processed for Fitting Room size allocations by IKEM VRLab; in 27 of those cases, the VR Fitting Room prevented potential recipients from being skipped on the waiting list, as they would have been under standard biometrics, CT axis measurements, and BMI ratios.

Overall, 941 patient-specific visualizations have been performed using Sibřina’s technology: 285 (28%) for liver recipients, 311 (31%) for liver donors, and 299 (23%) for liver resection, with living liver donors accounting for 59 (6%) cases and split/reduced donors for 21 (2%).
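The Fitting Room’s size-matching idea can be illustrated with a deliberately simplified sketch: compare a donor organ’s measured volume and longest axis, taken from the 3D reconstruction, against the space available in the recipient, and flag pairs that fall outside a tolerance band. The data fields, thresholds, and decision rule below are hypothetical placeholders for illustration; they are not IKEM VRLab’s actual allocation criteria.

```python
# Hypothetical size-compatibility check, loosely inspired by the "Fitting Room"
# concept described above. All fields and thresholds are illustrative only.
from dataclasses import dataclass


@dataclass
class Measurements:
    volume_ml: float         # organ (or recipient cavity) volume from the 3D model
    longest_axis_mm: float   # longest axis measured on the CT-derived reconstruction


def size_compatible(donor: Measurements,
                    recipient_space: Measurements,
                    volume_tolerance: float = 1.10,
                    axis_tolerance: float = 1.05) -> bool:
    """Return True if the donor organ plausibly fits the recipient's space.

    The donor organ is flagged compatible when neither its volume nor its
    longest axis exceeds the recipient's available space by more than the
    given tolerances. Real allocation involves many additional clinical factors.
    """
    fits_volume = donor.volume_ml <= recipient_space.volume_ml * volume_tolerance
    fits_axis = donor.longest_axis_mm <= recipient_space.longest_axis_mm * axis_tolerance
    return fits_volume and fits_axis


# Example: a donor liver that coarse biometrics might reject, but whose 3D
# measurements fall within tolerance of the recipient's available space.
donor = Measurements(volume_ml=1650.0, longest_axis_mm=182.0)
recipient = Measurements(volume_ml=1580.0, longest_axis_mm=178.0)
print("Compatible:", size_compatible(donor, recipient))
```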
A forum attendee using Materialise’s Mimics software in augmented reality (AR). Photo via Materialise.

Personalized healthcare: 3D printing implants and surgical guides

Beyond surgical planning and 3D visualisation, Materialise Mimics software supports the design and production of patient-specific implants and surgical guides. The company conducts healthcare contract manufacturing at its Leuven HQ and at its medical 3D printing facility in Plymouth, Michigan. Hospitals can design patient-specific medical devices in-house or collaborate with Materialise’s clinical engineers to develop custom components; Materialise then 3D prints these devices and ships them for clinical use. The Belgian company, headed by CEO Brigitte de Vet-Veithen, produces around 280,000 custom medical instruments each year, 160,000 of which are destined for the US market. These include personalised titanium cranio-maxillofacial (CMF) implants for facial reconstruction and colour-coded surgical guides.

Poole Hospital’s 3D specialists, Sian Campbell and Poppy Taylor-Crawford, shared how their team has adopted Materialise software to support complex CMF surgeries. Since acquiring the platform in 2022, they have developed digital workflows for planning and 3D printing patient-specific implants and surgical guides in 14 cases, particularly for facial reconstruction.

Campbell and Taylor-Crawford begin their workflow by importing patient CT and MRI data into Materialise’s Mimics Enlight CMF software. Automated tools handle initial segmentation, tumour resection planning, and the creation of cutting planes. For more complex cases involving fibula or scapula grafts, the team adapts these workflows to ensure precise alignment and fit of the bone graft within the defect. Next, the surgical plan and anatomical data are transferred to Materialise 3-matic, where the team designs patient-specific resection guides, reconstruction plates, and implants. These designs are refined through close collaboration with surgeons, incorporating feedback to optimise geometry and fit. Virtual fit checks verify guide accuracy, while further analysis ensures compatibility with surgical instruments and operating constraints. Once validated, the guides and implants are 3D printed for surgery. According to Campbell and Taylor-Crawford, these custom devices enable more accurate resections and implant placements, improving surgical alignment and reducing theatre time by minimising intraoperative adjustments.

An example of the cranio-maxillofacial implants and surgical guides 3D printed by Materialise. Photo by 3D Printing Industry.

Custom 3D printed implants are also fabricated at the Rizzoli Orthopaedic Institute in Bologna, Italy. Originally established as a motion analysis lab, the institute has expanded its expertise into surgical planning, biomechanical analysis, and now personalized 3D printed implant design. Dr. Alberto Leardini, Director of the Movement Analysis Laboratory, described his team’s patient-specific implant workflow. They combine CT and MRI scans to identify bone defects and tumour locations. Clinical engineers then use this data to build digital models and plan resections, and design cutting guides and custom implants tailored to each patient’s anatomy. These designs are refined in collaboration with surgeons before being outsourced to manufacturing partners for production.

Importantly, this workflow keeps the design and planning phases in-house. By hosting engineering and clinical teams together on-site, the institute aims to streamline decision-making and reduce lead times. Once the digital design is finalised, only the additive manufacturing step is outsourced, ensuring “zero distance” collaboration between teams. Dr. Leardini emphasised that this approach improves clinical outcomes and promises economic benefits: while custom implants require more imaging and upfront planning, they reduce time in the operating theatre, shorten hospital stays, and minimise patient transfers.

After a full day of presentations inside the Irish College’s eighteenth-century chapel, the consensus was clear. 3D technology is not a niche capability reserved for high-end procedures, but a valuable tool enhancing everyday care for thousands of patients globally. From faster surgeries to cost savings and personalized treatments, hospitals are increasingly embedding 3D technology into routine care. Materialise’s software sits at the heart of this shift, enabling clinicians to deliver safer, smarter, and more efficient healthcare.
Featured image shows 3D printed anatomical models at Materialise HQ in Leuven. Photo by 3D Printing Industry.
  • The Download: Montana’s experimental treatments, and Google DeepMind’s new AI agent

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    The first US hub for experimental medical treatments is coming

    The news: A bill that allows clinics to sell unproven treatments has been passed in Montana. Under the legislation, doctors can apply for a license to open an experimental treatment clinic and recommend and sell therapies not approved by the Food and Drug Administration (FDA) to their patients.

    Why it matters: Once it’s signed by the governor, the law will be the most expansive in the country in allowing access to drugs that have not been fully tested. The bill allows for any drug produced in the state to be sold in it, provided it has been through phase I clinical trials—but these trials do not determine if the drug is effective.

    The big picture: The bill was drafted and lobbied for by people interested in extending human lifespans. And these longevity enthusiasts are hoping Montana will serve as a test bed for opening up access to experimental drugs. Read the full story.

    —Jessica Hamzelou

    Google DeepMind’s new AI agent cracks real-world problems better than humans can

    Google DeepMind has once again used large language models to discover new solutions to long-standing problems in math and computer science. This time the firm has shown that its approach can not only tackle unsolved theoretical puzzles, but improve a range of important real-world processes as well.

    The new tool, called AlphaEvolve, uses large language models (LLMs) to produce code for a wide range of different tasks. LLMs are known to be hit and miss at coding. The twist here is that AlphaEvolve scores each of Gemini’s suggestions, throwing out the bad and tweaking the good, in an iterative process, until it has produced the best algorithm it can. In many cases, the results are more efficient or more accurate than the best existing (human-written) solutions. Read the full story.

    —Will Douglas Heaven
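    The loop described above can be sketched in a few lines of Python: ask an LLM for candidate programs, score each one against the task, keep the strongest, and feed those back as context for the next round. The snippet below is a minimal illustration of that general evolve-and-score pattern, not DeepMind’s actual AlphaEvolve implementation; the propose_with_llm and score functions are hypothetical stubs standing in for a real Gemini call and a real task-specific evaluator.

```python
# Minimal evolve-and-score loop in the spirit of the approach described above.
# Not DeepMind's code: propose_with_llm() and score() are hypothetical stubs.
import random


def propose_with_llm(parent_programs):
    """Placeholder for an LLM call that mutates or recombines parent programs."""
    base = random.choice(parent_programs) if parent_programs else "def solve(x): return x"
    return base + f"  # variant {random.randint(0, 9999)}"


def score(program):
    """Placeholder evaluator: in practice, run the program and measure its
    correctness and efficiency on the target task."""
    return random.random()


def evolve(generations=20, population_size=8, keep_top=3):
    population = [propose_with_llm([]) for _ in range(population_size)]
    for _ in range(generations):
        # Score every candidate and discard the weak ones...
        ranked = sorted(population, key=score, reverse=True)
        survivors = ranked[:keep_top]
        # ...then ask the LLM for new variants of the survivors.
        children = [propose_with_llm(survivors)
                    for _ in range(population_size - keep_top)]
        population = survivors + children
    return max(population, key=score)


best = evolve()
print("Best candidate found:\n", best)
```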

    Research cuts are threatening crucial climate data

    —Casey Crownhart

    Over the last few weeks, there’s been an explosion of news about proposed budget cuts to science in the US. Researchers and civil servants are sounding the alarm that those cuts mean we might lose key data that helps us understand our world and how climate change is affecting it.

    Long-running US government programs that monitor the snowpack across the West are among those being threatened by cuts across the US federal government, as my colleague James Temple’s new story explores. Also potentially in trouble: carbon dioxide measurements in Hawaii, hurricane forecasting tools, and a database that tracks the economic impact of natural disasters. 

    It’s all got me thinking: What do we lose when data is in danger? Read the full story.

    This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

    1 Donald Trump doesn’t want Apple building iPhones in India
    The US President claims Apple will be upping their US production as a result.
    + He also said that India was willing to “literally charge us no tariffs.”

    2 Elon Musk’s Grok chatbot ranted about white genocide
    In response to completely unrelated queries.
    + It’s not the first time Grok has shared questionable responses.
    + Grok told users it was instructed to accept white genocide as real.

    3 RFK Jr doesn’t think we should take his medical advice
    Which begs the question: why is he US Health and Human Services secretary?
    + Kennedy said his opinions on vaccines are irrelevant.
    + He defended his decision to downsize the health department amid protests.

    4 GM’s new EV battery can power a truck for more than 400 miles
    Its lithium manganese-rich cells use cheaper minerals than lithium-ion ones.
    + Tariffs are bad news for batteries.

    5 Anthropic has been accused of using AI-generated evidence in a legal case
    A lawyer for Universal Music Group claimed an expert cited a source that didn’t exist.
    + A judge in another case reportedly caught fake AI citations, too.
    + AI companies are finally being forced to cough up for training data.

    6 AI won’t put human radiologists out of a job any time soon
    The technology is helpful, but is unable to do everything trained human experts can.
    + Why it’s so hard to use AI to diagnose cancer.

    7 The US Defense Department wants faster aircraft and missiles
    And startups are more than willing to answer the call.
    + Phase two of military AI has arrived.

    8 SpaceX has successfully tested its Starship rocket
    Clearing a major hurdle ahead of its planned launch later this month.

    9 YouTube will start inserting ads into videos’ crucial moments
    Wow, that doesn’t sound annoying at all.

    10 Apple’s Vision Pro headset is a pain in the neck
    And early adopters are regretting shelling out $3,500 apiece.
    + Maybe the ability to scroll using their eyes will change their minds.

    Quote of the day

    “To say a professor is ‘some kind of monster’ for using AI to generate slides is, to me, ridiculous.”

    —Paul Shovlin, a professor at Ohio University, reacts to student backlash against professors using AI to create teaching materials, the New York Times reports.

    One more thing

    Who gets to decide who receives experimental medical treatments?

    There has been a trend toward lowering the bar for new medicines, and it is becoming easier for people to access treatments that might not help them—and could even harm them. Anecdotes appear to be overpowering evidence in decisions on drug approval. As a result, we’re ending up with some drugs that don’t work. We urgently need to question how these decisions are made. Who should have access to experimental therapies? And who should get to decide? Such questions are especially pressing considering how quickly biotechnology is advancing. We’re not just improving on existing classes of treatments—we’re creating entirely new ones. Read the full story.

    —Jessica Hamzelou

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day.
    + Food nostalgia is the best nostalgia, and this Bluesky account of discontinued foods doesn’t disappoint.
    + Don’t even think of calling your newborn baby King if you live in New Zealand.
    + Actor Jeremy Strong just loves a bucket hat.
    + Watch out Swiss drivers—a duck has been caught speeding.
  • A.I. Was Coming for Radiologists’ Jobs. So Far, They’re Just More Efficient.
    Experts predicted that artificial intelligence would steal radiology jobs.
    But at the Mayo Clinic, the technology has been more friend than foe.
    Source: https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiologists-mayo-clinic.html