Microsoft Academic
Every company has a mission. What's ours? To empower every person and every organization to achieve more. We believe technology can and should be a force for good and that meaningful innovation contributes to a brighter world in the future and today.
Recent updates
  • Semantic Telemetry: Understanding how users interact with AI systems
    www.microsoft.com
    AI tools are proving useful across a range of applications, from helping to drive the new era of business transformation to helping artists craft songs. But which applications are providing the most value to users? We'll dig into that question in a series of blog posts that introduce the Semantic Telemetry project at Microsoft Research. In this initial post, we will introduce a new data science approach that we will use to analyze topics and task complexity of Copilot in Bing usage.

Human-AI interactions can be iterative and complex, requiring a new data science approach to understand user behavior in order to build and support increasingly high-value use cases. Imagine the following chat: here we see that chats can be complex and span multiple topics, such as event planning, team building, and logistics.

Generative AI has ushered in a two-fold paradigm shift. First, LLMs give us a new thing to measure: how people interact with AI systems. Second, they give us a new way to measure those interactions: the capability to understand and make inferences on these interactions, at scale. The Semantic Telemetry project has created new measures to classify human-AI interactions and understand user behavior, contributing to efforts in developing new approaches for measuring generative AI across various use cases.

Semantic Telemetry rethinks traditional telemetry (in which data is collected to understand systems) for the purpose of analyzing chat-based AI. We employ an innovative data science methodology that uses a large language model (LLM) to generate meaningful categorical labels, enabling us to gain insights into chat log data.

Figure 1: Prompting an LLM to classify a conversation based on an LLM-generated label taxonomy

This process begins with developing a set of classifications and definitions.
We create these classifications by instructing an LLM to generate a short summary of the conversation, and then iteratively prompting the LLM to generate, update, and review classification labels on a batched set of summaries. This process is outlined in the paper TnT-LLM: Text Mining at Scale with Large Language Models. We then prompt an LLM with these generated classifiers to label new unstructured (and unlabeled) chat log data.

With this approach, we have analyzed how people interact with Copilot in Bing. In this blog, we examine insights into how people are using Copilot in Bing, including how that differs from traditional search engines. Note that all analyses were conducted on anonymous Copilot interactions containing no personal information.

Topics

To get a clear picture of how people are using Copilot in Bing, we first need to classify sessions into topical categories. To do this, we developed a topic classifier. We used the LLM classification approach described above to label the primary topic (domain) for the entire content of the chat. Although a single chat can cover multiple topics, for this analysis we generated a single label for the primary topic of the conversation. We sampled five million anonymized Copilot in Bing chats during August and September 2024 and found that globally, 21% of all chats were about technology, with a high concentration of these chats in programming and scripting and in computers and electronics.

Figure 2: Top Copilot in Bing topics based on anonymized data (August-September 2024)
Figure 3: Frequent topic summaries in Technology
Figure 4: Frequent topic summaries in Entertainment

Diving into the technology category, we find many professional tasks in programming and scripting, where users request problem-specific assistance such as fixing a SQL query syntax error.
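The two-stage labeling approach described above (summarize each chat, then classify it against the generated taxonomy) can be sketched as follows. This is an illustrative sketch, not the production pipeline: `call_llm` stands in for any chat-completion API, and the label list and prompt wording are hypothetical.

```python
# Illustrative sketch of the two-stage LLM labeling pipeline described above.
# `call_llm` is a hypothetical stand-in for any chat-completion API.

TAXONOMY = ["technology", "entertainment", "travel and tourism", "history and culture"]

def summarize_prompt(chat: str) -> str:
    """Stage 1: ask the LLM for a short summary of the conversation."""
    return f"Summarize the following chat in one sentence:\n\n{chat}"

def classify_prompt(summary: str, labels: list[str]) -> str:
    """Stage 2: ask the LLM to pick the single primary topic label."""
    options = ", ".join(labels)
    return (
        "Choose the single primary topic of this conversation summary.\n"
        f"Options: {options}\n"
        f"Summary: {summary}\n"
        "Answer with exactly one option."
    )

def parse_label(response: str, labels: list[str]) -> str:
    """Map a free-text LLM response onto a known label, defaulting to 'other'."""
    cleaned = response.strip().lower()
    for label in labels:
        if label in cleaned:
            return label
    return "other"

def label_chat(chat: str, call_llm) -> str:
    """Run both stages with an injected `call_llm(prompt) -> str` function."""
    summary = call_llm(summarize_prompt(chat))
    return parse_label(call_llm(classify_prompt(summary, TAXONOMY)), TAXONOMY)
```

In the actual project, the taxonomy itself is also generated and refined iteratively by the LLM (per TnT-LLM) rather than fixed up front as it is here.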
In computers and electronics, we observe users getting help with tasks like adjusting screen brightness and troubleshooting internet connectivity issues. We can compare this with our second most common topic, entertainment, in which we see users seeking information related to personal activities like hiking and game nights.

We also note that top topics differ by platform. The figure below depicts topic popularity based on mobile and desktop usage. Mobile users tend to use the chat for more personal tasks, such as helping to plant a garden or understanding medical symptoms, whereas desktop users conduct more professional tasks, like revising an email.

Figure 5: Top topics for desktop users and mobile users

Search versus Copilot

Beyond analyzing topics, we compared Copilot in Bing usage to that of traditional search. Chat extends beyond traditional online search by enabling users to summarize, generate, compare, and analyze information. Human-AI interactions are conversational and more complex than traditional search (Figure 6).

Figure 6: Bing Search query compared to Copilot in Bing conversation

A major differentiation between search and chat is the ability to ask more complex questions, but how can we measure this? We think of complexity as a scale ranging from simply asking chat to look up information to evaluating several ideas. We aim to understand the difficulty of a task if it were performed by a human without the assistance of AI. To achieve this, we developed the task complexity classifier, which assesses task difficulty using Anderson and Krathwohl's Taxonomy of Learning Objectives.
For our analysis, we grouped the learning objectives into two categories: low complexity and high complexity. Any task more complicated than information lookup is classified as high complexity. Note that this would be very challenging to classify using traditional data science techniques.

Comparing low- versus high-complexity tasks, most chat interactions were categorized as high complexity (78.9%), meaning that they were more complex than looking up information. Programming and scripting, marketing and sales, and creative and professional writing are topics in which users engage in higher-complexity tasks (Figure 7), such as learning a skill, troubleshooting a problem, or writing an article.

Figure 7: Most and least complex topics based on percentage of high-complexity tasks

Travel and tourism and history and culture scored lowest in complexity, with users looking up information like flight times and the latest news updates.

When should you use chat instead of search? A 2024 Microsoft Research study, The Use of Generative Search Engines for Knowledge Work and Complex Tasks, suggests that people are seeing value in technical, complex tasks such as web development and data analysis. Bing Search contained more queries with lower complexity focused on non-professional areas, like gaming and entertainment, travel and tourism, and fashion and beauty, while chat had a greater distribution of complex technical tasks (Figure 8).

Figure 8: Comparison of Bing Search and Copilot in Bing for anonymized sample data (May-June 2023)

Conclusion

LLMs have enabled a new era of high-quality human-AI interaction, and with it, the capability to analyze those same interactions with high fidelity, at scale, and in near real time.
We are now able to obtain actionable insights from complex data that are not possible with traditional data science pattern-matching methods. LLM-generated classifications are pushing research in new directions that will ultimately improve user experience and satisfaction when using chat and other human-AI interaction tools. This analysis indicates that Copilot in Bing is enabling users to do more complex work, specifically in areas such as technology. In our next post, we will explore how Copilot in Bing is supporting professional knowledge work and how we can use these measures as indicators for retention and engagement.

Footnote: This research was conducted at the time the Copilot in Bing feature was available as part of the Bing service; since October 2024, Copilot in Bing has been deprecated in favor of the standalone Microsoft Copilot service.

References:
Krathwohl, D. R. (2002). A Revision of Bloom's Taxonomy: An Overview. Theory Into Practice, 41(4), 212-218. https://doi.org/10.1207/s15430421tip4104_2
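The low/high grouping used by the task complexity classifier above can be sketched over the six levels of Anderson and Krathwohl's revised Bloom taxonomy. The level names are the standard ones; the grouping rule (anything beyond lookup counts as high complexity) follows the post, while the code itself is only an illustration of the mapping, not the LLM classifier.

```python
# Sketch of the low/high complexity grouping described above.
# Levels follow Anderson and Krathwohl's revised Bloom taxonomy;
# the grouping rule (anything beyond lookup is "high") follows the post.

BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def complexity(level: str) -> str:
    """Return 'low' for pure information lookup, 'high' for everything else."""
    level = level.strip().lower()
    if level not in BLOOM_LEVELS:
        raise ValueError(f"unknown level: {level}")
    return "low" if level == "remember" else "high"

def high_complexity_share(levels: list[str]) -> float:
    """Fraction of interactions labeled high complexity (cf. the 78.9% figure)."""
    labels = [complexity(lvl) for lvl in levels]
    return labels.count("high") / len(labels)
```

In practice the level for each chat would itself come from an LLM prompt, analogous to the topic classifier; this mapping is just the final aggregation step.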
  • The AI Revolution in Medicine, Revisited: An Introduction
    www.microsoft.com
Transcript

[MUSIC]

PETER LEE: This is The AI Revolution in Medicine, Revisited. I'm Peter Lee, president of Microsoft Research, and I'm pretty excited to introduce this series of conversations as part of the Microsoft Research Podcast.

About two years ago, with Carey Goldberg and Zak Kohane, we wrote a book, The AI Revolution in Medicine. This was a book that was intended to educate the world of healthcare and the world of medical research about this new thing that was emerging, this idea of generative AI. And we wrote the book in secret. In fact, the whole existence of what we now know of as OpenAI's GPT-4 AI model hadn't been publicly disclosed or revealed to the world. And so when we were working on this book, we had to make some guesses. What is this going to mean for healthcare? If you're a doctor or a nurse, in what ways will AI impact your work? If you're a patient, in what ways could AI change your experience as you try to navigate a complex healthcare system?

And so now it's been about two years. Two years hence, what did we get right? What did we get wrong? What things have come along much faster than we ever would have dreamed of? What did we miss? And what things have turned out to be much harder than we ever could have realized? And so this series of conversations is going to talk to people in the real world. We'll delve into exactly what's happening in the clinic, the patient experience, how people are thinking about safety and regulatory matters, and what this all means for discovery and advancement of medical science. And even then, we'll have guests that will allow us to look into the future: the AI advances that are happening now and what is going to happen next.

[MUSIC TRANSITIONS TO SERIES THEME]

[MUSIC FADES]

So now, let me just take a step back here to talk about this book project. And I'd like to just read the first couple of sentences in Chapter 1, which is entitled "First Contact." And it starts with a quote.
Quote, "I think that Zak and his mother deserve better than that," unquote. I was being scolded. And while I've been scolded plenty in my life, for the first time it wasn't a person scolding me; it was an artificial intelligence system. So that's how we started this book, and I wanted to read that because, at least for me, it takes me back to the kind of awe and wonderment in those early days when, in secret development, we had access from OpenAI to what we now know of as GPT-4.

And what was that quote about? Well, after getting access to GPT-4, I became very interested in what this might mean for healthcare. But I, not being a doctor, knew I needed help. So I had reached out to a good colleague of mine who is a doctor, a pediatric endocrinologist, and head of the bioinformatics department at Harvard Medical School, Dr. Isaac "Zak" Kohane. And I sought his help. And in our back-and-forth discussions, one of the things that Zak shared with me was an article that he wrote for a magazine where he talked about his use of machine learning in the care of his 90-year-old mother, who, like many 90-year-old people, was having some health issues.

And this article was very interesting. It really went into some detail about not only the machine learning technology that Zak had created in order to help manage his mother's health but also the kind of emotional burden of doing this and in what ways technology was helping Zak cope with that. And so as I read that article, it touched me, because at that time I was struggling in a very similar way with my own father, who was then 89 years old and was also suffering from some very significant health issues. And, like Zak, I was feeling some pangs of guilt because my father was living in Southern California; I was way up in the Pacific Northwest, you know, just feeling guilty not being there, present for him, through his struggles.
And reading that article, a thought that occurred to me was, I wonder if in the future, AI could pretend to be me so that my father could always have a version of me to talk to. And I also had the thought in the other direction: could AI someday capture enough of my father so that when and if he passes, I always have some memory of my father that I could interact with? A strange and bizarre thought, I admit, but a natural one, I think, for any human being that's encountering this amazing AI technology for the first time. And so I ran an experiment. I used GPT-4 to read Zak's article and then posed the question to GPT-4: based on this article, could you pretend to be Zak? I'll pretend to be Zak's mother, and let's test whether it's possible to have a mother-son conversation.

To my surprise, GPT-4's response at that time was to scold me, basically saying that this is wrong; that this has a lot of dangers and risks. You know, what if Zak's mother really needs the real Zak? And in those early days of this encounter with AI, that was incredibly startling. It just really forces you to reexamine yourself, and it kicked off our writing in the book as really not only being about a technology that could help lead to better diagnoses, help reduce medical errors, reduce the amount of paperwork and clerical burden that doctors go through, and help demystify and help patients navigate a healthcare system, but it could actually be a technology that forces people to reexamine their relationships and reexamine what it really means for people to take care of other people.

And since then, of course, I've come to learn that many people have had similar experiences in their first encounters with AI. And in fact, I've come to think of this as, somewhat tongue in cheek, the nine stages of AI grief.
And they actually relate to what we'll try to address in this new series of conversations.

For me, the first time that Greg Brockman and Sam Altman presented what we now know of as OpenAI's GPT-4 to me, they made some claims about what it could do. And my first reaction was one of skepticism; it seemed that the claims that were being made just couldn't be true. Then that, kind of, passed into, I would say, a period of annoyance, because I started to see my colleagues here in Microsoft Research start to show some amazement about the technology. I actually was annoyed because I felt they were being duped by this technology. So that's the second phase. And then, the third phase was concern and maybe even a little bit of frustration, because it became clear that, as a company here at Microsoft, we were on the verge of making a big bet on this new technology. And that was concerning to me because of my fundamental skepticism. But then I got my hands on the technology myself. And that enters into a fourth stage, of amazement. You start to encounter things that just are fundamentally amazing. This leads to a period of intensity, because I immediately surmised that, wow, this could really change everything, and there would be very few areas of change more important than healthcare. And that is stage five, a period of serious intensity where you're just losing sleep and working so hard to try to imagine what this all could mean, running as many experiments as you can, trying to lean on as much real expertise as possible. You then lead from there into a period of what I call chagrin, because as amazing as the technology is, actually understanding how to harness it in real life is not easy. You finally get into this stage of what I would call enlightenment. [MUSIC] And I won't claim to be enlightened. But it is, sort of, a combination of acceptance that we are in a new world today, that things are happening for real, and that there's, sort of, no turning back.
And at that point, I think we can really get down to work. And so as we think about really the ultimate purpose of this series of conversations that we're about to have, it's really to help people get to that stage of enlightenment, to really, kind of, roll up our sleeves, to sit down and think through all of the best knowledge and experience that we've gathered over the last two years, and chart the future of this AI revolution in medicine.

[MUSIC TRANSITIONS TO SERIES THEME]

Let's get going.

[MUSIC FADES]
  • Advancing biomedical discovery: Overcoming data challenges in precision medicine
    www.microsoft.com
Introduction

Modern biomedical research is driven by the promise of precision medicine: tailored treatments for individual patients through the integration of diverse, large-scale datasets. Yet the journey from raw data to actionable insights is fraught with challenges. Our team of researchers at Microsoft Research in the Health Futures group, in collaboration with the Perelman School of Medicine at the University of Pennsylvania, conducted an in-depth exploration of these challenges in a study published in Nature Scientific Reports. The goal of this research was to identify pain points in the biomedical data lifecycle and offer actionable recommendations to enable secure data sharing, improved interoperability, and robust analysis, and to foster collaboration across the biomedical research community.

Study at a glance

A deep understanding of the biomedical discovery process is crucial for advancing modern precision medicine initiatives. To explore this, our study involved in-depth, semi-structured interviews with biomedical research professionals spanning various roles, including bench scientists, computational biologists, researchers, clinicians, and data curators. Participants provided detailed insights into their workflows, from data acquisition and curation to analysis and result dissemination.
We used an inductive-deductive thematic analysis to identify key challenges occurring at each stage of the data lifecycle, from raw data collection to the communication of data-driven findings.

Some key challenges identified include:

• Data procurement and validation: Researchers struggle to identify and secure the right datasets for their research questions, often battling inconsistent quality and manual data validation.
• Computational hurdles: The integration of multiomic data requires navigating disparate computational environments and rapidly evolving toolsets, which can hinder reproducible analysis.
• Data distribution and collaboration: The absence of a unified data workflow and secure sharing infrastructure often leads to bottlenecks when coordinating between stakeholders across university labs, pharmaceutical companies, clinical settings, and third-party vendors.

Main takeaways and recommendations

Establishing a unified biomedical data lifecycle

This study highlights the need for a unified process that spans all phases of the biomedical discovery process, from data gathering and curation to analysis and dissemination. Such a data "jobs-to-be-done" framework would streamline standardized quality checks, reduce manual errors such as metadata reformatting, and ensure that the flow of data across different research phases remains secure and consistent. This harmonization is essential to accelerate research and build more robust, reproducible models that propel precision medicine forward.

Empowering stakeholder collaboration and secure data sharing

Effective biomedical discovery requires collaboration across multiple disciplines and institutions. A key takeaway from our interviews was the critical importance of collaboration and trust among stakeholders. Secure, user-friendly platforms that enable real-time data sharing and open communication among clinical trial managers, clinicians, computational scientists, and regulators can bridge the gap between isolated research silos.
As a possible solution, by implementing centralized cloud-based infrastructures and democratizing data access, organizations can dramatically reduce data handoff issues and accelerate scientific discovery.

Adopting actionable recommendations to address data pain points

Based on the insights from this study, the authors propose a list of actionable recommendations, such as:

• Creating user-friendly platforms to transition from manual (bench-side) data collection to electronic systems.
• Standardizing analysis workflows to facilitate reproducibility, including version control and the seamless integration of notebooks into larger workflows.
• Leveraging emerging technologies such as generative AI and transformer models for automating data ingestion and processing of unstructured text.

If implemented, the recommendations from this study would help forge a reliable, scalable infrastructure for managing the complexity of biomedical data, ultimately advancing research and clinical outcomes.

At Microsoft Research, we believe in the power of interdisciplinarity and innovation. This study not only identifies the critical pain points that have slowed biomedical discovery but also illustrates a clear path toward improved data integrity, interoperability, and collaboration. By uniting diverse stakeholders around a common, secure, and scalable data research lifecycle, we edge closer to realizing individualized therapeutics for every patient.

We encourage our colleagues, partners, and the broader research community to review the full study and consider these insights as key steps toward a more integrated biomedical data research infrastructure. The future of precision medicine depends on our ability to break down data silos and create a research data lifecycle that is both robust and responsive to the challenges of big data.

Explore the full paper in Nature Scientific Reports to see how these recommendations were derived, and consider how they might integrate into your work.
Let's reimagine biomedical discovery together, where every stakeholder contributes to a secure, interoperable, and innovative data ecosystem that transforms patient care. We look forward to engaging with the community on these ideas as we continue to push the boundaries of biomedical discovery at Microsoft Research.
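As a small illustration of the kind of automation the recommendations above call for (replacing manual metadata reformatting with standardized quality checks), a schema validation pass over sample metadata might look like the following sketch. The field names, allowed values, and rules are hypothetical, not taken from the study.

```python
# Hypothetical sketch: standardized validation of sample metadata records,
# the kind of automated quality check the recommendations above describe.
# Field names and rules are illustrative, not from the study.

REQUIRED_FIELDS = {"sample_id", "collection_date", "assay_type"}
KNOWN_ASSAYS = {"rna-seq", "wgs", "proteomics"}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in one metadata record (empty if clean)."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "assay_type" in record:
        assay = str(record["assay_type"]).lower()
        if assay not in KNOWN_ASSAYS:
            problems.append(f"unknown assay_type: {record['assay_type']}")
    return problems

def validate_batch(records: list[dict]) -> dict:
    """Summarize a batch so every handoff applies the same quality gate."""
    report = {"clean": 0, "flagged": []}
    for i, rec in enumerate(records):
        issues = validate_record(rec)
        if issues:
            report["flagged"].append((i, issues))
        else:
            report["clean"] += 1
    return report
```

The point of such a gate is that it runs identically at every stage of the lifecycle, so errors are caught at ingestion rather than discovered downstream by a collaborator.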
  • Magma: A foundation model for multimodal AI agents across digital and physical worlds
    www.microsoft.com
Imagine an AI system capable of guiding a robot to manipulate physical objects as effortlessly as it navigates software menus. Such seamless integration of digital and physical tasks has long been the stuff of science fiction. Today, Microsoft researchers are bringing that vision closer to reality with Magma, a multimodal AI foundation model designed to process information and generate action proposals across both digital and physical environments. It is designed to enable AI agents to interpret user interfaces and suggest actions like button clicks, while also orchestrating robotic movements and interactions in the physical world.

Built on the foundation model paradigm, Magma is pretrained on an expansive and diverse dataset, allowing it to generalize better across tasks and environments than smaller, task-specific models. As illustrated in Figure 1, Magma synthesizes visual and textual inputs to generate meaningful actions, whether executing a command in software or grabbing a tool in the physical world. This new model represents a significant step toward AI agents that can serve as versatile, general-purpose assistants.

Figure 1: Magma is one of the first foundation models capable of interpreting and grounding multimodal inputs within both digital and physical environments. Given a described goal, Magma can formulate plans and execute actions to achieve it. By effectively transferring knowledge from freely available visual and language data, Magma bridges verbal, spatial, and temporal intelligence to navigate complex tasks and settings.
Vision-Language-Action (VLA) models integrate visual perception, language comprehension, and action reasoning to enable AI systems to interpret images, process textual instructions, and propose actions. These models bridge the gap between multimodal understanding and real-world interaction. Typically pretrained on large numbers of VLA datasets, they acquire the ability to understand visual content, process language, and perceive and interact with the spatial world, allowing them to perform a wide range of tasks. However, due to the dramatic differences among various digital and physical environments, separate VLA models are trained and used for different environments. As a result, these models struggle to generalize to new tasks and environments outside of their training data. Moreover, most of these models do not leverage pretrained vision-language (VL) models or diverse VL datasets, which hampers their understanding of VL relations and their generalizability.

Magma, to the best of our knowledge, is one of the first VLA foundation models that can adapt to new tasks in both digital and physical environments, which helps AI-powered assistants or robots understand their surroundings and suggest appropriate actions. For example, it could enable a home assistant robot to learn how to organize a new type of object it has never encountered, or help a virtual assistant generate step-by-step user interface navigation instructions for an unfamiliar task. Through Magma, we demonstrate the advantages of pretraining a single VLA model for AI agents across multiple environments while still achieving state-of-the-art results on user interface navigation and robotic manipulation tasks, outperforming previous models that are tailored to these specific domains.
On VL tasks, Magma also compares favorably to popular VL models that are trained on much larger datasets.

Building a foundation model that spans such different modalities has required us to rethink how we train and supervise AI agents. Magma introduces a novel training paradigm centered on two key innovations: Set-of-Mark (SoM) and Trace-of-Mark (ToM) annotations. These techniques, developed by Microsoft Research, imbue the model with a structured understanding of tasks in both user interface navigation and robotic manipulation domains.

Set-of-Mark (SoM): SoM is an annotated set of key objects or interface elements that are relevant to achieving a given goal. For example, if the task is to navigate a web page, the SoM includes the bounding boxes for all clickable user interface elements. In a physical task like setting a table, the SoM could include the plate, the cup, and the position of each item on the table. By providing SoM, we give Magma a high-level hint of what needs attention, the essential elements of the task, without yet specifying the order or method.

Figure 2: Set-of-Mark (SoM) for action grounding. Set-of-Mark prompting enables effective action grounding in images for UI screenshots (left), robot manipulation (middle), and human video (right) by having the model predict numeric marks for clickable buttons or robot arms in image space. These marks give Magma a high-level hint of what needs attention: the essential elements of the task.

Trace-of-Mark (ToM): ToM extends the strategy of overlaying marks from static images to dynamic videos by incorporating tracing lines that follow object movements over time. While SoM highlights key objects or interface elements relevant to a task, ToM captures how these elements change or move throughout an interaction. For example, in a physical task like moving an object on a table, ToM might illustrate the motion of a hand placing the object and adjusting its position.
By providing these temporal traces, ToM offers Magma a richer understanding of how actions unfold, complementing SoM's focus on what needs attention.

Figure 3: Trace-of-Mark (ToM) for action planning. Trace-of-Mark supervision for robot manipulation (left) and human action (right). It compels the model to comprehend temporal video dynamics and anticipate future states before acting, while using fewer tokens than next-frame prediction to capture longer temporal horizons and action-related dynamics without ambient distractions.

Performance and evaluation

Zero-shot agentic intelligence

Table 1: Zero-shot evaluation on agentic intelligence. We report the results for pretrained Magma without any domain-specific finetuning. In this experiment, Magma is the only model that can conduct the full task spectrum.

Figure 4: Zero-shot evaluation on Google Robots and Bridge with SimplerEnv. Magma shows strong zero-shot cross-domain robustness and demonstrates impressive results in cross-embodiment manipulation simulation tasks.

Efficient finetuning

Table 2: Efficient finetuning on Mind2Web for web UI navigation.

Figure 5: Few-shot finetuning on Widow-X robot (left) and LIBERO (right). Magma achieves a significantly higher average success rate in all task suites. Additionally, removing SoM and ToM during pretraining has a negative impact on model performance.

Table 3: Without task-specific data, Magma performs competitively and even outperforms some state-of-the-art approaches such as Video-Llama2 and ShareGPT4Video on most benchmarks, despite using much less video instruction tuning data.

Relation to broader research

Magma is one component of a much larger vision within Microsoft Research for the future of agentic AI systems.
Across various teams and projects at Microsoft, we are collectively exploring how AI systems can detect, analyze, and respond in the world to amplify human capabilities.

Earlier this month, we announced AutoGen v0.4, a fully reimagined open-source library for building advanced agentic AI systems. While AutoGen focuses on the structure and management of AI agents, Magma enhances those agents by empowering them with a new level of capability. Developers can already use AutoGen to set up an AI assistant that leverages a conventional LLM for planning and dialogue. Now, with Magma, if developers want to build agents that execute physical or user interface/browser tasks, that same assistant could call upon Magma to understand the environment, perform reasoning, and take a sequence of actions to complete the task.

The reasoning ability of Magma can be further developed by incorporating test-time search and reinforcement learning, as described in ExACT. ExACT shows an approach for teaching AI agents to explore more effectively, enabling them to intelligently navigate their environments, gather valuable information, evaluate options, and identify optimal decision-making and planning strategies.

At the application level, we are also exploring new user experiences (UX) powered by foundation models for the next generation of agentic AI systems. Data Formulator is a prime example. Announced late last year, Data Formulator is an AI-driven visualization tool developed by Microsoft Research that translates high-level analytical intents into rich visual representations by handling complex data transformations behind the scenes.

Looking ahead, the integration of reasoning, exploration, and action capabilities will pave the way for highly capable, robust agentic AI systems.

Magma is available on Azure AI Foundry Labs as well as on Hugging Face with an MIT license.
Please refer to the Magma project page (opens in new tab) for more technical details. We invite you to test and explore these cutting-edge agentic model innovations from Microsoft Research.
  • Exploring the structural changes driving protein function with BioEmu-1
    www.microsoft.com
From forming muscle fibers to protecting us from disease, proteins play an essential role in almost all biological processes in humans and other life forms alike. There has been extraordinary progress in recent years toward better understanding protein structures using deep learning, enabling the accurate prediction of protein structures from their amino acid sequences. However, predicting a single protein structure from its amino acid sequence is like looking at a single frame of a movie: it offers only a snapshot of a highly flexible molecule. Biomolecular Emulator-1 (BioEmu-1) is a deep-learning model that provides scientists with a glimpse into the rich world of different structures each protein can adopt, or structural ensembles, bringing us a step closer to understanding how proteins work. A deeper understanding of proteins enables us to design more effective drugs, as many medications work by influencing protein structures to boost their function or prevent them from causing harm. One way to model different protein structures is through molecular dynamics (MD) simulations. These tools simulate how proteins move and deform over time and are widely used in academia and industry. However, in order to simulate functionally important changes in structure, MD simulations must be run for a long time. This is a computationally demanding task, and significant effort has been put into accelerating simulations, going as far as designing custom computer architectures (opens in new tab). Yet, even with these improvements, many proteins remain beyond what is currently possible to simulate and would require simulation times of years or even decades. Enter BioEmu-1 (opens in new tab), a deep-learning model that can generate thousands of protein structures per hour on a single graphics processing unit. 
Today, we are making BioEmu-1 open-source (opens in new tab), following our preprint (opens in new tab) from last December, to empower protein scientists in studying structural ensembles with our model. It provides orders of magnitude greater computational efficiency compared to classical MD simulations, thereby opening the door to insights that have, until now, been out of reach. We have enabled this by training BioEmu-1 on three types of data sets: (1) AlphaFold Database (AFDB) (opens in new tab) structures, (2) an extensive MD simulation dataset, and (3) an experimental protein folding stability dataset (opens in new tab). Training BioEmu-1 on the AFDB structures is like mapping distinct islands in a vast ocean of possible structures. When preparing this dataset, we clustered similar protein sequences so that BioEmu-1 can recognize that a protein sequence maps to multiple distinct structures. The MD simulation dataset helps BioEmu-1 predict physically plausible structural changes around these islands, mapping out the plethora of possible structures that a single protein can adopt. Finally, through fine-tuning on the protein folding stability dataset, BioEmu-1 learns to sample folded and unfolded structures with the right probabilities. Figure 1: BioEmu-1 predicts diverse structures of LapD protein unseen during training. We sampled structures independently and reordered the samples to create a movie connecting two experimentally known structures. Combining these advances, BioEmu-1 successfully generalizes to unseen protein sequences and predicts multiple structures. 
In Figure 1, we show that BioEmu-1 can predict structures of the LapD protein (opens in new tab) from Vibrio cholerae bacteria, which causes cholera. BioEmu-1 predicts structures of LapD when it is bound to and unbound from c-di-GMP molecules, both of which are experimentally known but not in the training set. Furthermore, our model offers a view on intermediate structures, which have never been experimentally observed, providing viable hypotheses about how this protein functions. Insights into how proteins function pave the way for further advancements in areas like drug development. Figure 2: BioEmu-1 reproduces the D. E. Shaw Research (DESRES) simulation of Protein G accurately with a fraction of the computational cost. On the top, we compare the distributions of structures obtained by extensive MD simulation (left) and independent sampling from BioEmu-1 (right). Three representative sample structures are shown at the bottom. Moreover, BioEmu-1 reproduces MD equilibrium distributions accurately with a tiny fraction of the computational cost. In Figure 2, we compare 2D projections of the structural distribution of the D. E. Shaw Research (DESRES) simulation of Protein G (opens in new tab) and samples from BioEmu-1. BioEmu-1 reproduces the MD distribution accurately, while requiring 10,000-100,000 times fewer GPU hours. Figure 3: BioEmu-1 accurately predicts protein stability. On the left, we plot the experimentally measured free energy differences ΔG against those predicted by BioEmu-1. On the right, we show a protein in folded and unfolded structures. Furthermore, BioEmu-1 accurately predicts protein stability, which we measure by computing the folding free energies, a way to quantify the ratio between the folded and unfolded states of a protein. Protein stability is an important factor when designing proteins, e.g., for therapeutic purposes. 
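The two-state picture behind these folding free energies can be made concrete in a few lines of code. The sketch below is illustrative only: the sample counts and temperature are made-up values, not numbers from the paper, and it simply applies the standard relation ΔG = -RT ln(N_folded / N_unfolded) to counts of sampled structures.

```python
import math

R_KCAL = 0.0019872  # gas constant in kcal/(mol*K)

def folding_free_energy(n_folded: int, n_unfolded: int, temperature_k: float = 298.0) -> float:
    """Estimate the folding free energy (kcal/mol) from sampled structure counts.

    Uses the two-state relation dG = -RT * ln(N_folded / N_unfolded);
    negative values mean the folded state is favored.
    """
    if n_folded <= 0 or n_unfolded <= 0:
        raise ValueError("need at least one sample in each state")
    return -R_KCAL * temperature_k * math.log(n_folded / n_unfolded)

# Hypothetical ensemble: 950 of 1,000 sampled structures are folded.
dg = folding_free_energy(950, 50)  # about -1.7 kcal/mol, i.e., folding is favorable
```

In practice the hard part is classifying each sampled structure as folded or unfolded; the arithmetic above is only the easy final step.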
Figure 3 shows the folding free energies predicted by BioEmu-1, obtained by sampling protein structures and counting folded versus unfolded protein structures, compared against experimental folding free energy measurements. We see that even on sequences that BioEmu-1 has never seen during training, the predicted free energy values correlate well with experimental values. Professor Martin Steinegger (opens in new tab) of Seoul National University, who was not part of the study, says: "With highly accurate structure prediction, protein dynamics is the next frontier in discovery. BioEmu marks a significant step in this direction by enabling blazing-fast sampling of the free-energy landscape of proteins through generative deep learning." We believe that BioEmu-1 is a first step toward generating the full ensemble of structures that a protein can take. In these early days, we are also aware of its limitations. With this open-source release, we hope scientists will start experimenting with BioEmu-1, helping us map out its potential and shortcomings so we can improve it in the future. We are looking forward to hearing how it performs on the various proteins you care about. Acknowledgements: BioEmu-1 is the result of a highly collaborative team effort at Microsoft Research AI for Science. The full list of authors: Sarah Lewis, Tim Hempel, José Jiménez-Luna, Michael Gastegger, Yu Xie, Andrew Y. K. Foong, Victor García Satorras, Osama Abdin, Bastiaan S. Veeling, Iryna Zaporozhets, Yaoyi Chen, Soojung Yang, Arne Schneuing, Jigyasa Nigam, Federico Barbero, Vincent Stimper, Andrew Campbell, Jason Yim, Marten Lienen, Yu Shi, Shuxin Zheng, Hannes Schulz, Usman Munir, Ryota Tomioka, Cecilia Clementi, Frank Noé
  • Ideas: Quantum computing redefined with Chetan Nayak
    www.microsoft.com
Transcript [TEASER] [MUSIC PLAYS UNDER DIALOGUE] CHETAN NAYAK: People sometimes say, well, quantum computers are just going to be like classical computers but faster. And that's not the case. So I really want to emphasize the fact that quantum computers are an entirely different modality of computing. You know, there are certain problems which quantum computers are not just faster at than classical computers but quantum computers can solve and classical computers have no chance of solving. [TEASER ENDS] GRETCHEN HUIZINGA: You're listening to Ideas, a Microsoft Research Podcast that dives deep into the world of technology research and the profound questions behind the code. I'm Gretchen Huizinga. In this series, we'll explore the technologies that are shaping our future and the big ideas that propel them forward. [MUSIC FADES] My guest today is Dr. Chetan Nayak, a technical fellow of Quantum Hardware at Microsoft Quantum. Under Chetan's leadership, the Microsoft Quantum team has published a paper that demonstrates a fundamental operation for a scalable topological quantum computer. The team also announced the creation of the world's first topoconductor (more on that later) and the first QPU architecture with a topological core, called the Majorana 1. Chetan Nayak, I can't wait to find out what all of this is. Welcome to Ideas! CHETAN NAYAK: Thank you. Thanks for having me. And I'm excited to tell you about this stuff. HUIZINGA: Well, you have a huge list of accomplishments, accolades, and awards (a little alliteration there). But I want to start by getting to know a bit more about you and what got you there. So specifically, what's your research origin story, as it were? What big idea inspired you to study the smallest parts of the universe? NAYAK: It's a great question. I think if I really have to go back to the origin story, it starts when I was a kid, you know, probably a preteen. 
And, you know, I'd go to bookstores to... I know, I guess many of the people listening to this may not know what that is, [LAUGHTER] but there used to be these brick-and-mortar storefronts where they would sell books, physical books, HUIZINGA: Right. NAYAK: ...and I'd go to bookstores to, you know, to buy books to read, you know, fiction. But I would browse through them, and there'd be a nonfiction section. And often there'd be used books, you know, sometimes used textbooks or used popular science books. And I remember, even though they were bookstores, not libraries, I would spend a lot of time there leafing through books and got exposed to, accidentally exposed to, a lot of ideas that I wouldn't otherwise have been. You know, just, sort of, you know, I maybe went there, you know, looking to pick up the next Lord of the Rings book, and while I was there, you know, wander into a book that was sort of explaining the theory of relativity to non-scientists. And I remember leafing through those books and actually reading about Einstein's discoveries, you know, most famously E = mc², but actually a lot of those books were explaining these thought experiments that Einstein did where he was thinking about, you know, if he were on a train that were traveling at the speed of light, what would light look like to him? [LAUGHTER] Would he catch up to it? You know, and all these incredible thought experiments that he did to try to figure out, you know, to really play around with the basic laws of physics as they were currently understood, and by, you know, stretching and pulling them and taking them to extreme situations, you could either find the flaws in them or in some cases see what the next steps were. And that was, you know, really inspirational to me. 
I, you know, around the same time, also started leafing through various advanced math books and a little later picked up a book on calculus and started flipping through it, a used book with, like, you know, the cover falling apart and the pages starting to fall out. But there was a lot of, you know, accidental discovery of topics through wandering through bookstores, actually. I also, you know, went to this great magnet high school in New York City called Stuyvesant High School, where I was surrounded by people who were really interested in science and math and technology. So I think, you know, for me, that origin story really starts, you know, maybe even earlier, but at least in my preteen years when, you know, I went through a process of learning new things and trying to understand them in my own way. And the more you do that, eventually you find maybe you're understanding things in a little different way than anybody else ever did. And then pretty soon, you know, you're discovering things that no one's ever discovered before. So that's, sort of, how it started. HUIZINGA: Yeah. Well, I want to drill in a little bit there because you've brought to mind a couple of images. One is from a Harry Potter movie, Harry Potter and the Half-Blood Prince, where he discovers the potions handbook, but it's all torn up, and they were fighting about who didn't get that book. And it turned out to be... so there's you in a bookstore somewhere between the sci-fi and the non-fi, shall we call it. And you're, kind of, melding the two together. And I love how you say, "I was accidentally exposed." [LAUGHTER] Sounds kind of like radiation of some kind, and you've turned into a scientist. A little bit more on that. This idea of quantum, because you've mentioned Albert Einstein: there's quantum physics, quantum mechanics, now quantum computing. Do these all go together? I mean, what came out of what in that initial, sort of, exploration with you? 
Where did you start getting interested in the quantum of things? NAYAK: Yeah, so I definitely started with relativity, not quantum. That was the first thing I heard about. And I would say in a lot of ways, that's the easier one. I mean, those are the two big revolutions in physics in the 20th century, relativity and quantum theory, and quantum mechanics is by far, at least for me and for many people, the harder one to get your head around because it is so counterintuitive. Quantum mechanics in some sense, or quantum theory in some sense, for most of what we experience in the world, is many abstraction layers away from what we experience. What I find amazing is that the people who created, you know, discovered quantum mechanics, they had nothing but the equations to guide them. You know, they didn't really understand what they were doing. They knew that there were some holes or gaps in the fundamental theory, and they kind of stumbled into these equations, and they gave the right answers, and they just had to follow it. Actually, just a few weeks ago, I was in Arosa, which is a small Swiss town in the Alps. That's actually the town where Schrödinger discovered Schrödinger's equation. HUIZINGA: No! NAYAK: Yeah, a hundred years ago, this summer. HUIZINGA: Amazing! NAYAK: So Schrödinger suffered tuberculosis, which eventually actually killed him much later in his life. And so he went into the mountains... HUIZINGA: ...for the cure. NAYAK: ...for his health, yeah, to a sanatorium to recover from tuberculosis. And while he was there in Arosa, he discovered his equation. And it's a remarkable story because, you know, that equation, he didn't even know what the equation meant. He just knew, well, particles are waves, and waves have wave equations. Because that's ultimately Maxwell's equations. You can derive wave equations for light waves and radio waves and microwaves, x-rays. 
And he said, you know, there has to be a wave equation for this thing, and this wave equation needs to somehow correctly predict the energy levels in hydrogen. HUIZINGA: Oh, my gosh. NAYAK: And he, you know, worked out this equation and then solved it, which, for that time period, was not entirely trivial. And he got correctly the energy levels of hydrogen, for which people had the spectra, the different wavelengths of light that hydrogen emits. And lo and behold, it works. He had no idea why. No idea what it even meant. But he knew that he was onto something. And then remarkably, other people were able to build on what he'd done, were able to say, no, there must be a grain of truth here, if not the whole story, and let's build on this, and let's make something that is richer and encompasses more and try to understand the connections between this and other things. And Heisenberg was, around the same time, developing his so-called matrix mechanics, a different way of thinking about quantum mechanics, and then people like Dirac realized the connections between those. So it's a remarkable story how people, how scientists, took these things they understood, you know, imposed on it a certain level of mathematical consistency and a need for the math to predict things that you could observe, and once you had, sort of, the internal mathematical consistency and it was correctly explaining a couple of data points about the world, you could build this huge edifice based on that. And so that was really impressive to me as I learned that. And that's 100 years ago! It was 1925. HUIZINGA: Right. Well, let me... NAYAK: And that's quantum mechanics! HUIZINGA: OK. NAYAK: You're probably going to say, well, how does quantum computing fit into this, you know? [LAUGHTER] Right? And that's a much later development. People spent a long time just trying to understand quantum mechanics, extend it, use it to understand more things, to understand, you know, other particles. 
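The check described here, that a wave equation "correctly predict the energy levels in hydrogen," is easy to reproduce numerically. Schrödinger's equation gives levels E_n = -13.6 eV / n², and differences between levels give the observed spectral wavelengths. The sketch below uses textbook constants (not anything from this conversation) to recover the well-known red Balmer line of hydrogen:

```python
RYDBERG_EV = 13.6057  # hydrogen ground-state binding energy, in eV
HC_EV_NM = 1239.84    # Planck constant times speed of light, in eV*nm

def energy_level(n: int) -> float:
    """Energy of the n-th hydrogen level (eV), as predicted by the Schrodinger equation."""
    return -RYDBERG_EV / n**2

def emission_wavelength_nm(n_upper: int, n_lower: int) -> float:
    """Wavelength of the photon emitted in the n_upper -> n_lower transition."""
    delta_e = energy_level(n_upper) - energy_level(n_lower)  # positive for emission
    return HC_EV_NM / delta_e

# The 3 -> 2 transition gives the red H-alpha line, observed at about 656 nm.
wavelength = emission_wavelength_nm(3, 2)
```

Matching a handful of measured wavelengths like this one is exactly the kind of "couple of data points" the transcript says the early edifice was built on.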
So it was initially introduced to understand the electron, but you could understand atoms, molecules, and subatomic things and quarks and positrons. So there were rich, you know, decades of development and understanding, and then eventually it got combined with relativity, at least to some extent. So there was a lot to do there to really understand and build upon the early discoveries of quantum mechanics. One of those directions, which was kicked off by Feynman around, I think, 1982, and independently by a Russian mathematician named Yuri Manin, was: OK, great, today's computing, again, is many abstraction layers away from anything quantum mechanical, and in fact, it's sort of separated from the quantum world by many classical abstraction layers. But what if we built a technology that didn't do that? Like, that's a choice. It was a choice. It was a choice that was partially forced on us just because of the scale of the things we could build. But as computers get smaller and smaller and the way Moore's law is heading, you know, at some point, you're going to get very close to that point at which you cannot abstract away quantum mechanics, [LAUGHTER] where you must deal with quantum mechanics, and it's part and parcel of everything. You are not in the fortunate case where, out of quantum theory has emerged the classical world that behaves the way we expect it to intuitively. And, you know, once we go past that, that potentially is really catastrophic and scary because, you know, you're trying to make things smaller for the sake of, you know, Moore's law and for making computers faster and potentially more energy efficient. But, you know, if you get down to this place where the momentum and position of things, of the electrons, you know, or of the currents that you're relying on for computation, if they're not simultaneously well-defined, how are you going to compute with that? It looks like this is all going to break down. And so it looks like a real crisis. 
But, you know, what they realized and what Feynman realized was actually it's an opportunity. It's actually not just a crisis. Because if you do it the right way, then actually it gives you way more computational power than you would otherwise have. And so rather than looking at it as a crisis, it's an opportunity. And it's an opportunity to do something that would be otherwise unimaginable. HUIZINGA: Chetan, you mentioned a bunch of names there. I have to say I feel sorry for Dr. Schrödinger because most of what he's known for to people outside your field is a cat, a mysterious cat in a box, meme after meme. But you've mentioned a number of really important scientists in the field of quantum everything. I wonder, who are your particular quantum heroes? Are there any particular, sort of, modern-day 21st-century or 20th-century people that have influenced you in such a way that it's like, I really want to go deep here? NAYAK: Well, definitely, you know, the one person I mentioned, Feynman, is later, so he's the second wave, you could say. OK, so the first wave is like Schrödinger and Heisenberg, and you could say Einstein was the leading edge of that first wave, and Planck. And the second wave, maybe you'd say is, I don't know if Dirac is first or second wave. You might say Dirac is second wave and potentially Landau, a great Russian physicist, second wave. Then maybe Feynman's the third wave, I guess? I'm not sure if he's second or third wave, but anyway, he's post-war and was really instrumental in the founding of quantum computing as a field. He had a famous statement, which is, you know, in his lectures, "There's always room at the bottom." And, you know, what he was thinking about there was, you can go to these extreme conditions, like very low temperatures and in some cases very high magnetic fields, and new phenomena emerge when you go there, phenomena that you wouldn't otherwise observe. 
And in a lot of ways, many of the early quantum theorists, to some extent, were extreme reductionists because, you know, they were really trying to understand smaller and smaller things and things that in some ways are more and more basic. At the same time, you know, some of them, if not all of them, held in their mind the idea that, you know, actually, more complex behaviors emerge out of simple constituents. Einstein famously, in his miracle year of 1905, one of the things he did was he proposed the theory of Brownian motion, which is an emergent behavior that relies on underlying atomic theory, but it is several layers of abstraction away from the underlying atoms and molecules, and it's a macroscopic thing. So Schrödinger famously, among his other contributions, is the person who came up with the concept of entanglement... HUIZINGA: Yes. NAYAK: ...in understanding his theory. And for that matter, Schrödinger's cat is a way to understand the paradoxes that occur when the classical world emerges from quantum mechanics. So they were thinking a lot about how these really incredible, complicated things arise or emerge from very simple constituents. And I think Feynman is one of those people who really bridged that as a post-war scientist, because he was thinking a lot about quantum electrodynamics and the basic underlying theory of electrons and photons and how they interact. But he also thought a lot about liquid helium and ultimately about quantum computing. The motivation for him in quantum computing was: you have these complex systems with many underlying constituents, and it's really hard to solve the equations. The equations are basically unsolvable. HUIZINGA: Right. NAYAK: They're complicated equations. You can't just, sort of, solve them analytically. Schrödinger was able to do that with his equation because it was one electron, one proton, OK. 
But when you have, you know, for a typical solid, you'll have Avogadro's number of electrons and ions inside something like that, there's no way you're going to solve that. And what Feynman recognized, as others did, really coming back to Schrödinger's observation on entanglement, is you actually can't even put it on a computer and solve a problem like that. And in fact, it's not just that with Avogadro's number you can't; you can't put it on a computer and solve it with a thousand, you know, [LAUGHTER] atoms, right? And actually, you aren't even going to be able to do it with a hundred, right. And when I say you can't do that on a computer, it's not that, well, datacenters are getting bigger, and we're going to have gigawatt datacenters, and then that's the point at which we'll be able to... no, the fact is, the amazing thing about quantum theory is, let's say you're trying to solve a problem with 1,000 atoms in it. You know, if you go to 1,001, you're doubling the size of the problem. If you were to store it on a cloud, just to store the answer, I should say, on a classical computer, you'd have to double the size. So there's no chance of getting to 100, even with all the buildout of datacenters that's happening at this amazing pace, which is fantastic and is driving all these amazing advances in AI. That buildout is never going to lead to a classical computer that can even store the answer to a difficult quantum mechanical problem. HUIZINGA: Yeah, so basically in answer to the "who are your quantum heroes," you've kind of given us a little history of quantum computing, kind of, the leadup and the questions that prompted it. So we'll get back to that in one second, because I want you to go a little bit further on where we are today. 
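The doubling argument above is simple to state precisely: the quantum state of n two-level systems requires 2^n complex amplitudes, so each added system doubles the storage. The sketch below assumes 16 bytes per amplitude (a double-precision complex number), which is a conventional but arbitrary choice made only for the sake of the estimate:

```python
def state_memory_bytes(n: int) -> int:
    """Bytes needed to store the full state vector of n two-level systems:
    2**n complex amplitudes at 16 bytes (double-precision complex) each."""
    return (2 ** n) * 16

# Adding a single two-level system doubles the requirement, whatever n is.
assert state_memory_bytes(1001) == 2 * state_memory_bytes(1000)

# Even n = 100 needs roughly 2e31 bytes; a zettabyte is only 1e21 bytes,
# so no conceivable datacenter buildout closes that gap.
bytes_for_100 = state_memory_bytes(100)
```

This is why "just build bigger datacenters" never catches up: the problem size doubles per atom while hardware grows only polynomially.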
But before we do that, you've also alluded to something that's super interesting to me, which is, in light of all the recent advances and claims in AI, especially generative AI, that are making claims like we'll be able to shorten the timeline on scientific discovery and things like that, why, then, do we need quantum computing? Why do we need it? NAYAK: Great question. So AI, and machine learning, at least so far, is only as good as the training data that you have for it. So if you train AI on all the data we have, and if you train AI on problems we can solve, which at some level are classical, you will be able to solve classical problems. Now, protein folding is one of those problems where the solution is basically classical, very complicated and difficult to predict but basically classical, and there was a lot of data on it, right. And so it was clearly a big data problem that's basically classical. As far as we know, there's no classical way to simulate or mimic quantum systems at scale; there's a clean separation between the classical and quantum worlds. And so, you know, quantum theory is the fundamental theory of the world, and there is no hidden classical model that is lurking [LAUGHTER] in the background behind it. People sometimes call these things hidden variable theories, you know, which Einstein actually was really hoping, late in his life, that there was. That there was, hiding behind quantum mechanics, some hidden classical theory that was just obscured from our view. We didn't know enough about it, and the quantum thing was just our best approximation. If that's true, then, yeah, maybe an AI can actually discover that classical theory that's hiding behind the quantum world and would therefore be able to answer the problems we need to answer. But that's almost certainly not the case. 
You know, there's just so much experimental evidence about the correctness of quantum mechanics and quantum theory, and many experiments that really, kind of, rule out many aspects of such a classical theory, that I think we're fairly confident there isn't going to be some classical approximation or underlying theory hiding behind quantum mechanics. And therefore an AI model, which at the end of the day is some kind of very large matrix (you know, a neural network is some very large classical model obeying some very classical rules, where you take inputs and produce outputs through many layers), is not going to produce, you know, a quantum theory. Now, on the other hand, if you have a quantum computer and you can use that quantum computer to train an AI model, then the AI model is learning (you're teaching it quantum mechanics), and at least within a certain realm of quantum problems, it can interpolate what we've learned about quantum mechanics and quantum problems to solve new problems that, you know, you hadn't already solved. Actually, you know, like I said, in the early days, I was reading these books and flipping through these bookstores, and I'd sometimes figure out my own ways to solve problems, different from how it was in the books. And then eventually I ended up solving problems that hadn't been solved. Well, that's sort of what an AI does, right? It trains off of the internet or off of playing chess against itself many times. You know, it learns, and then takes that, and eventually, by learning its own way to do things, you know, it learns things that we as humans haven't discovered yet. HUIZINGA: Yeah. NAYAK: And it could probably do that with quantum mechanics if it were trained on quantum data. But without that, you know, the world is ultimately quantum mechanical. It's not classical. And so something classical is not going to be a general-purpose substitute for quantum theory. HUIZINGA: OK, Chetan, this is fascinating. 
And as you've talked about pretty well everything so far, that's given us a really good, sort of, background on quantum history as we know it in our time. Talk a little bit about where we are now, particularly (and we're going to get into topology in a minute, topological stuff) but I want to know where you feel like the science is now, and be as concise as you can because I really want to get to your cool work that we're going to talk about. And this question includes: what's a Majorana, and why is it important? NAYAK: Yeah. So OK, unfortunately, it won't be that concise an answer. OK, so, you know, in the early '80s, ideas about quantum computing were put forward. But I think most people thought, A, this is going to be very difficult, you know, to do. And I think, B, it wasn't clear that there was enough motivation. You know, I think Feynman said, yes, if you really want to simulate quantum systems, you need a quantum computer. And I think at that point, people weren't really sure: is that the most pressing thing in the world? You know, simulating quantum systems? It's great to understand more about physics, understand more about materials, understand more about chemistry, but we weren't even at that stage there where, hey, that's the thing that's limiting progress for society. And then, secondly, there was also this feeling that, you know, what you're really doing is some kind of analog computing. You know, this doesn't feel digital, and if it doesn't feel digital, there's this question about error correction and how reliable is it going to be. So Peter Shor actually, you know, did two amazing things, one of which is a little more famous in the general public but one of which is probably more important technically. He did these two amazing things in the mid-'90s. He first came up with Shor's algorithm, where he said, if you have a quantum computer, yeah, great for simulating quantum systems, but actually you can also factor large numbers. 
You can find the prime factors of large numbers, and the difficulty of that problem is the underlying security feature of RSA [encryption]; many of these public-key cryptography systems rely on certain types of problems that are really hard. It's easy to multiply two large primes together and get the output, and you can use that to encrypt data. But to decrypt it, you need to know those two numbers, and it's hard to find those factors. What Peter Shor discovered is that an ideal quantum computer would be really good at this, OK. So that was the first discovery. And at that point, what seemed at the time an academic problem of simulating quantum systems, which seemed like, in Feynman's vision, that's what quantum computers are for, that seemingly academic problem, all of a sudden, it turns out there's this very important, both financially and economically and national security-wise, other application of a quantum computer. And a lot of people sat up and took notice at that point. So that's huge. But then there's a second thing that he, you know, discovered, which was quantum error correction. Because everyone, when he first discovered it, said, sure, ideally that's how a quantum computer works. But quantum error correction, you know, this thing sounds like an analog system. How are you going to correct errors? This thing will never work because it'll never operate perfectly. Schrödinger's problem with the cat is going to happen: you're going to have entanglement. The thing is going to just end up being basically classical, and you'll lose all the supposed gains you're getting from quantum mechanics. And quantum error correction, that second discovery of Peter Shor's, really, you know, suddenly made it look like, OK, at least in principle, this thing can happen. And people built on that. Peter Shor's original quantum error correction, I would say, was based on a lot of ideas from classical error correction. 
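The asymmetry described here, easy to multiply primes, hard to recover them, can be seen at toy scale. The sketch below uses a classic textbook keypair (real RSA moduli are on the order of 2048 bits, far beyond any brute-force loop); it is only meant to show that whoever can factor the public modulus can reconstruct the private key:

```python
from math import isqrt

def trial_factor(n: int) -> int:
    """Brute-force factoring: fine for a toy modulus, hopeless at real RSA sizes."""
    for candidate in range(2, isqrt(n) + 1):
        if n % candidate == 0:
            return candidate
    raise ValueError("n is prime")

# Classic textbook RSA keypair.
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent via modular inverse

ciphertext = pow(65, e, n)          # encrypt the message m = 65
assert pow(ciphertext, d, n) == 65  # legitimate decryption works

# An attacker who factors n recovers the private key outright.
p_found = trial_factor(n)
q_found = n // p_found
d_cracked = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(ciphertext, d_cracked, n) == 65
```

Shor's algorithm makes the factoring step efficient on a quantum computer, which is exactly why this discovery made people sit up and take notice.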
Because you have the same problem with classical communication and classical computing. Alexei Kitaev then came up with, you know, a new set of quantum error correction procedures, which really don't rely in the same way on classical error correction. Or if they do, it's more indirect, and in many ways they rely on ideas in topology and physics. And, you know, those ideas, which led to quantum error correcting codes, but also ideas about what kind of underlying physical systems would have built-in hardware error protection, led to what we now call topological quantum computing and topological qubits. Because it's this idea that, you know, just like people went, in the early days of computers, from vacuum tubes to silicon (actually, initially germanium transistors and then silicon transistors), similarly you had to have the right underlying material in order to make qubits.

HUIZINGA: OK.

NAYAK: And that the right underlying material platform, just as for classical computing it's been silicon for decades and decades, was going to be one of these so-called topological states of matter. And that these would be states of matter whose defining feature, in a sense, would be that they protect quantum information from errors, at least to some extent. Nothing's perfect, but, you know, in a controllable way so that you can make it better as needed, and good enough that any subsequent error correction, what you might call software-level error correction, would not be so cumbersome and introduce so much overhead as to make a quantum computer impractical. I would say, you know, the field had a reboot or a rebirth in the mid-1990s, and pretty quickly those ideas, in addition to the applications and algorithms, you know, coalesced around error correction and what's called fault tolerance.
And many of those ideas, you know, freely interchanged between ideas in topology and the physics of what are called topological phases, and, you know, gave birth to the set of ideas on which Microsoft's program has been based, which is to look for the right material, create the right material and qubits based on it, so that you can get to a quantum computer at scale. Because there's a number of constraints there. And the work that we're really excited about right now is about getting the right material and harnessing that material for qubits.

HUIZINGA: Well, let's talk about that in the context of this paper that you're publishing and some pretty big news in topology. You just published a paper in Nature that demonstrates, with receipts, a fundamental operation for a scalable topological quantum computer relying on, as I referred to before, Majorana zero modes. That's super important. So tell us about this and why it's important.

NAYAK: Yeah, great. So building on what I was just saying about having the right material, what we're relying on is, to an extent, superconductivity. So that's one of the, you know, really cool, amazing things about the physical world. That many metals, including aluminum, for instance, when you cool them down, they're able to carry electricity with no dissipation, OK. No energy loss associated with that. And what underlies that remarkable property is that the electrons form up into pairs, these things called Cooper pairs. And those Cooper pairs, their wave functions kind of lock up and go in lockstep, and as a result, actually the number of them fluctuates wildly, you know, in any place locally. And that enables them to, you know, move easily and carry current. But also, a fundamental feature, because they form pairs, is that there's a big difference between an even and odd number of electrons.
Because if there's an odd number of electrons, then actually there's some electron that's unpaired somewhere, and there's an energy penalty, an energy cost, associated with that. It turns out that that's not always true. There's actually a subclass of superconductors called topological superconductors, or topoconductors, as we call them, and topoconductors have this amazing property that actually they're perfectly OK with an odd number of electrons! In fact, when there's an odd number of electrons, there isn't any unpaired electron floating around. Topological superconductors just don't have that penalty. That's the remarkable thing about it. I've been warned not to say what I'm about to say, but I'll just go ahead [LAUGHTER] and say it anyway. I guess that's a bad way to introduce something ...

HUIZINGA: No, it's actually really exciting!

NAYAK: OK, but since you brought up, you know, Harry Potter and the Half-Blood Prince, you know, Voldemort famously split his soul into seven or, I guess, technically eight, accidentally. [LAUGHTER] He split his soul into seven Horcruxes, so in some sense, there was no place where you could say, well, that's where his soul is.

HUIZINGA: Oh, my gosh!

NAYAK: So Majorana zero modes do kind of the same thing! Like, there's this unpaired electron potentially in the system, but you can't find it anywhere. Because to an extent, you've actually figured out a way to split it and put it, you know, sometimes we say, at the two ends of the system, but that's sort of a mathematical construct. The reality is there is no place where that unpaired electron is!

HUIZINGA: That's crazy. Tell me, before you go on, we're talking about Majorana. I had to look it up. That's a guy's name, right? So do a little dive into what this whole Majorana zero mode is.

NAYAK: Yeah, so Majorana was an Italian physicist, or maybe technically Sicilian physicist. He was very active in the '20s and '30s and then just disappeared mysteriously around 1937, '38, around that time.
So no one knows exactly what happened to him. You know, but in one of his last works, which I think may have only been published after he disappeared, he proposed this equation called the Majorana equation. And he was actually thinking about neutrinos at the time, and particles, subatomic particles, that carry no charge. And so, you know, he was thinking about something very, very different from quantum computing, actually, right. So Majorana (who didn't know anything about quantum computing, didn't know anything about topological superconductors, maybe didn't even know much about superconductivity at all) was thinking about subatomic particles, but he wrote down this equation for neutral objects, for things that don't carry any charge. And so when people started, you know, in the '90s and 2000s, looking at topological superconductors, they realized that there are these things called Majorana zero modes. So, as I said, and let me explain how they enter the story, I just said that in topological superconductors, there's no place you can find that even or odd number of electrons. There's no penalty. Now superconductors do have a penalty, and it's called the energy gap, for breaking a pair. Even topological superconductors. You take a pair, a Cooper pair, you break it, you have to pay that energy cost, OK. And it's, like, double the energy, in a sense, of having an unpaired electron, because you've created two unpaired electrons when you break that pair. Now, somehow a topological superconductor has to accommodate that unpaired electron. It turns out the way it accommodates it is that it can absorb or emit one at the ends of the wire. If you have a topological superconductor, a topoconductor wire, at the ends, it can absorb or emit one of these things. And once it goes into one end, then it's totally delocalized over the system, and you can't find it anywhere. You can say, oh, it got absorbed at this end, but you can look, and there's nothing you can tell.
Nothing has changed about the other end. It's now a global property of the whole thing, and you actually need to somehow, and I'll come to this, somehow figure out how to connect the two ends and actually measure the whole thing collectively to see if there's an even or odd number of electrons. Which is why it's so great as a qubit. Because the reason it's hard for Schrödinger's cat to be both dead and alive is that you're going to look at it, and when you look at it, photons are going to bounce off it, and you're going to know if it's dead or alive. And the thing that was slightly paradoxical is that a person doesn't actually have to perceive it. If there's anything in the environment, you know, if a photon bounces off, it's sort of like if a tree falls in the forest ...

HUIZINGA: I was just going to say that!

NAYAK: ... it still makes a sound. I know! It still makes a sound in the sense that Schrödinger's cat is still going to be dead or alive once a photon or an air molecule bounces off it, because of the fact that it's gotten entangled with, effectively, the rest of the universe, you know, many other parts of the universe, at that point. And so the fact that there is no place where you can go and point to that unpaired electron means that that even or oddness, which we call parity (whether something's even or odd is its parity), is hidden. And, you know, these are wires with, you know, 100 million electrons in them. And it's the difference between 100 million and 100 million and one. You know, because one's an even number and the other's odd. And that difference, the environment can't detect it. So it doesn't get entangled with anything, and so it can actually be "dead and alive" at the same time, you know, unlike Schrödinger's cat, and that's what you need to make a qubit: to create those superpositions. And so Majorana zero modes are these features of the system that don't actually carry an electrical charge.
But they are a place where a single unpaired electron can enter the system and then disappear. And so they are this remarkable thing where you can hide stuff. [LAUGHS]

HUIZINGA: So how does that relate to your paper and the discoveries that you've made here?

NAYAK: Yeah, so in an earlier paper ... so now the difficulty is you have to actually make this thing. So, you know, you put a lot of problems up front, in that you're saying, OK, the solution to our problem is we need this new material and we need to harness it for qubits, right. Great. Well, where are we going to get this material from, right? You might discover it in nature. Nature may hand it to you. But in many cases, it doesn't. And this is one of those cases where we actually had to engineer the material. And engineering the material turns out to be a challenge. People had ideas early on that they could put together some combination of semiconductors and superconductors. But, you know, for us to really make progress, we realized that, you know, it's a very particular combination. And we had to develop, and we did develop, simulation capabilities. Classical, unfortunately; we don't have a quantum computer, so we had to do this with classical computers. We had to classically simulate various kinds of materials combinations to find one, or find a class, that would get us into the topological phase. And it turned out lots of details mattered there, OK. It involves a semiconductor, which is indium arsenide. It's not silicon, and it's not the second most common semiconductor, gallium nitride, which is used in LED lights. It's something called indium arsenide. It has some uses as an infrared detector, but it's a different semiconductor. And we're using it in a nonstandard way, putting it into contact with aluminum and getting, kind of, the best of both worlds of a superconductor and a semiconductor so that we can control it and get into this topological phase.
And that's a previously published paper in an American Physical Society journal. But that's great. So that shows that you can create this state of matter. Now we need to build on it; we have to harness it, and we have to, as I said, make one of these wires, or, in many cases, multiple wires, qubits, et cetera, complex devices, and we need to figure out, how do we measure whether we have 100 million or 100 million and one electrons in one of these wires? And that was the problem we solved. We made a device where we took something called a quantum dot, you should think of it as a tiny little capacitor, and that quantum dot is coupled to the wire in such a way that an electron, it's kind of remarkable, an electron can quantum mechanically tunnel between them. You know, this is an electron: you don't know where it is at any given time. Its momentum and its position aren't well defined. So for an electron whose, let's say, energy is well defined, there is some probability amplitude that it's on the wire and not on the dot. Even though it should be on the dot, it actually can, kind of, leak out or quantum mechanically end up on the wire and come back. And because of that fact, the simple fact that its quantum mechanical wave function can actually have it be on the wire, it actually becomes sensitive to that even or oddness.

HUIZINGA: Interesting.

NAYAK: And that causes a small change in the capacitance of this tiny little parallel plate capacitor, effectively, that we have. And that tiny change in capacitance, just to put it into numbers, is a femtofarad, OK. So that's a decimal point followed by 14 zeros and a one. So that's how tiny it is. If we put that tiny change in capacitance into a larger resonant circuit, then that larger resonant circuit shows a small shift in its resonant frequency, which we can detect.
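The scale of that readout can be sketched with back-of-envelope numbers. For an LC resonator, f0 = 1/(2π√(LC)), so a small capacitance change dC shifts the resonance by roughly (f0/2)(dC/C). The resonator values below are illustrative assumptions, not the actual device parameters; only the femtofarad-scale dC comes from the interview:

```python
import math

C = 0.4e-12   # total resonator capacitance: 0.4 pF (assumed)
dC = 1.0e-15  # parity-dependent capacitance shift: ~1 fF, as in the interview
L = 100e-9    # resonator inductance: 100 nH (assumed)

f0 = 1 / (2 * math.pi * math.sqrt(L * C))  # resonant frequency, ~800 MHz here
df = (f0 / 2) * (dC / C)                   # first-order resonance shift

print(f"f0 = {f0/1e6:.1f} MHz, shift = {df/1e3:.1f} kHz")
```

With these assumed values the one-electron parity difference moves an ~800 MHz resonance by under a megahertz, which is why the shift is read out as a small phase change on a microwave tone rather than measured directly.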
And so what we demonstrated is that we can detect the difference, that one-electron difference, that even or oddness, which, again, is not a local property of anywhere in the wire, but which we can nevertheless detect. And that's, kind of, the fundamental thing you have to have if you want to be able to use these things for quantum information processing: this parity, you have to be able to measure what that parity is, right. That's a fundamental thing. Because ultimately, the information you need is classical information. You're going to want to know the answer to some problem. It's going to be a string of zeros and ones. You have to measure that. But moreover, in the particular architecture we're using, the basic operations for us are measurements of this type, which is a very digital process. I mentioned, sort of, how quantum computing looks a little analog in some ways, but it's not really analog. Well, that's very manifestly true in our architecture: our operations are a succession of measurements that we turn on and off, but different kinds of measurements. And so what the paper shows is that we can do these measurements. We can do them fast. We can do them accurately.

HUIZINGA: OK.

NAYAK: And the additional, you know, announcements that we're making right now are work that we've done extending and building on that, showing additional types of measurements, a scalable qubit design, and then building on that to multi-qubit arrays.

HUIZINGA: Right.

NAYAK: So that really unlocked our ability to do a number of things.
And I think you can see the acceleration now with the announcements we have right now.

HUIZINGA: So, Chetan, you've just talked about the idea of living in a classical world and having to simulate quantum stuff.

NAYAK: Yup.

HUIZINGA: Tell us about the full stack here and how we go, in your mind, from quantum computing at the bottom all the way to the top.

NAYAK: OK, so one thing to keep in mind is quantum computers are not a general-purpose accelerator for every problem. You know, people sometimes say, well, quantum computers are just going to be like classical computers but faster. And that's not the case. So I really want to emphasize that quantum computers are an entirely different modality of computing. You know, there are certain problems which quantum computers are not just faster at than classical computers, but which quantum computers can solve and classical computers have no chance of solving. On the other hand, there are lots of things that classical computers are good at that quantum computers aren't going to be good at, because you won't get any big scale-up, like a lot of big data problems where you have lots of classical data. You know, a quantum computer with, let's say, let's call it 1,000 qubits, and here I mean 1,000 logical qubits, and we'll come back to what that means, but 1,000 error-corrected qubits can solve problems that you have no chance of solving with a classical computer, even with all the world's computing. In fact, for 1,000 qubits, you would have to take every single atom in the entire universe, OK, and turn that into a transistor, and it still wouldn't be big enough. You don't have enough bytes, even if every single atom in the universe were a byte.
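The arithmetic behind that claim can be checked directly: describing the state of n qubits classically takes 2**n complex amplitudes, and for n = 1,000 that number dwarfs the commonly cited ~10**80 atoms in the observable universe:

```python
# The state of 1,000 qubits is a vector of 2**1000 complex amplitudes;
# the observable universe holds only ~10**80 atoms (order-of-magnitude estimate).

amplitudes = 2 ** 1000
atoms_in_universe = 10 ** 80

print(len(str(amplitudes)))            # 2**1000 has 302 decimal digits
print(amplitudes > atoms_in_universe)  # True: one byte per atom falls far short
```

Even granting a byte of storage to every atom leaves the classical description short by more than 200 orders of magnitude, which is the sense in which "it still wouldn't be big enough."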
So that's how big these quantum problems are when you try to store them on a classical computer, just to store the answer, let's say.

HUIZINGA: Yeah.

NAYAK: But conversely, if you have a lot of classical data, like all the data on the internet, which we train, you know, our AI models with, you can't store that on 1,000 qubits, right. You actually can't really store more than 1,000 bits of classical information on 1,000 qubits. So many things that we have big data for classically, we don't have the ability to really, truly store within a quantum computer in a way that you can do anything with. So we should definitely not view quantum computers as replacing classical computers. There are lots of things that classical computers are already good at, and we're not trying to do those things. But there are many things that classical computers are not good at at all. A quantum computer we should think of as a complementary thing, an accelerator for those types of problems. It will have to work in collaboration with a classical computer: the classical computer will do the classical steps, and the quantum computer will do the quantum steps. So that's one thing to just keep in mind. When we talk about a quantum computer, it is part of a larger computing, you know, framework where there are many classical elements. It might be CPUs, it might be GPUs, it might be custom ASICs for certain things, and then, you know, a quantum processor as well. So ...

HUIZINGA: Is that called a QPU?

NAYAK: A QPU is the quantum processing unit, exactly! So we'll have CPUs, GPUs, and QPUs. And at the lowest layer of that stack is the underlying physical substrate. That's our topoconductor. It's the material from which we build our QPUs. That's the quantum processing unit. The quantum processing unit includes all of the qubits that we have in our architecture on a single chip.
And that's, kind of, one of the big key design features: that the qubits be small and manufacturable on a single wafer. And then the QPU also has to enable that quantum world to talk to the classical world ...

HUIZINGA: Right.

NAYAK: ... because you have to send it, you know, instructions and you have to get back answers. And for us, that is turning on and off measurements, because our instructions are a sequence of measurements. And then we ultimately have to get back a string of zeros and ones. But initially those measurements are where we're getting, you know, phase shifts on microwaves, which are in turn telling us about small capacitance shifts, which are in turn telling us the parity of electrons in a wire.

HUIZINGA: Right.

NAYAK: So really, this is a quantum machine in which, you know, you have the qubits that are built on the quantum plane. You've then got this quantum-classical interface where the classical information is going in and out of the quantum processor. And then there's a lot of classical processing that has to happen, both to enable error correction and to enable computations. And the whole thing has to be inside of a cryogenic environment. So it's a very special environment in which, A, it's kept cold because that's what you need in order to have a topoconductor, and that's also what you need, in general, for the qubits to be very stable. So when we talk about the full stack, just on the hardware side, there are many layers to this. And then of course, you know, there is the classical firmware that takes instructions and turns them into the physical things that need to happen. And then, of course, we have algorithms and then ultimately applications.

HUIZINGA: Yeah, so I would say, Chetan, that people can probably go do their own little research on how you go from temperatures that are lower than deep space to the room you're working in. And we don't have time to unpack that on this show.
And also, I was going to ask you what could possibly go wrong if you indeed got everything right. And you mentioned earlier, you know, what happens in an AI world if we get everything right. If you put quantum and AI together, it's an interesting question what that world looks like. Can you just take a brief second to say what you're thinking about what could happen to cryptography, to, you know, just all kinds of things that we might be wondering about in a post-quantum world?

NAYAK: Great question. So, you know, first of all, one of the things I want to, kind of, emphasize is, ultimately, when we think about the potential for technology, often the limit comes down to physics. There are physics limits. You know, if you think about, like, interstellar travel and things like that, well, the speed of light is kind of a hard cutoff, [LAUGHTER] and actually, you're not going to be able to go faster than the speed of light, and you have to bake that in. If you think of a datacenter, ultimately, you know, there's a certain amount of energy, and there's a certain amount of cooling power you have. And you can say, well, this datacenter is 100 megawatts, and then in the future, we'll have a gigawatt to use. But ultimately, that energy has to come from somewhere, and you've got some hard physical constraints. So similarly, you could ask, you know, with quantum computers, what are the hard physical constraints? What are the things you just can't do, because you can't make a perpetual motion machine; you can't violate the laws of quantum mechanics. And I think in the early days, there was this concern that, you know, this idea relies on violating something. You're doing something that's not going to work.
You know, I'd say the theory of quantum error correction, the theory of fault tolerance, you know, many of the algorithms that have been developed, they really do show that there is no fundamental physical constraint saying that this isn't going to happen, you know. That, you know, somehow you would need to have either more power than you can really generate, or you would need to go much colder than you can actually get. You know, there's no physical, you know, no-go result. So that's an important thing to keep in mind. Now, the thing is, some people might then be tempted to say, well, OK, now it's just an engineering problem, because we know this can in principle work, and we just have to figure out how to make it work. But the truth is, there isn't any such, like, hard barrier where you say, well, up until here, it's fundamental physics, and beyond this, it's just an engineering problem. The reality is, you know, new difficulties and challenges arise at every step along the way. And one person might call it an engineering or an implementation challenge, and another person may call it a fundamental, you know, barrier or obstruction, and I think people will probably profitably disagree, you know, agree to disagree on, like, where that line goes. I think for us, it was really crucial, you know, as we look out at the scale at which quantum computers are really going to make an impact: we're going to need thousands, you know, hundreds to thousands, of logical qubits, that is, error-corrected qubits. And when you look at what that means, that means really a million physical qubits. That is a very large scale in a world in which people have mostly learned what we know about these things from 10 to 100 qubits.
To project out from that to a million, you know, it would surprise me if the solutions that are optimal for 10 to 100 qubits are the same solutions that are optimal for a million qubits, right.

HUIZINGA: Yeah.

NAYAK: And that has been a motivation for us: let's try to think, based on what we now know, of things that at least have a chance to work at that million-qubit scale. Let's not do anything that looks like it's going to clearly hit a dead end before then.

HUIZINGA: Right.

NAYAK: Now, obviously in science, nothing is certain, and you learn new things along the way, but we didn't want to start out with things that looked like they were not going to, you know, work for a million qubits. That was the reason that we developed this new material, that we created, engineered, this new material, you know, these topoconductors: precisely because we said we need a material that can give us something where we can operate it fast and make it small and be able to control these things. So, you know, I think that's one key thing. And, you know, what we've demonstrated now is that we can harness this; that we've got a qubit. And that's why we have a lot of confidence that, you know, these things aren't going to be decades away. These things are going to be years away. And that was the basis for our interaction with DARPA [Defense Advanced Research Projects Agency]. We've just signed a contract with DARPA to go into the next phase of the DARPA US2QC program. And, you know, DARPA, the US government, wants to see a fault-tolerant quantum computer, because they do not want any surprises.

HUIZINGA: Right?!? [LAUGHS]

NAYAK: And, you know, there are people out there who said, you know, quantum computers are decades away; don't worry about it. But I think the US government realizes they might be years, not decades, away, and they want to get ahead of that.
And so that's why they've entered into this agreement with us and the contract with us.

HUIZINGA: Yeah.

NAYAK: And so, you know, the thing I just want to make sure that listeners to the podcast understand is that we fundamentally re-engineered, re-architected, what we think a quantum computer should look like and what the qubit should be, even going all the way down to the underlying materials. Which is high risk, right? I mean, there was no guarantee that any of this was going to work, A. And, B, there was no guarantee we would even be able to do the things we've done so far. I mean, you know, that's the nature of it. If you're going to try to do something really different, you're going to have to take risks. And we did take risks by really starting at, you know, the ground floor and trying to redesign and re-engineer these things. So that was a necessary part of this journey and the story: for us to re-engineer these things in a high-risk way. What that leads to is, you know, potentially changing that timeline. And so in that context, it's really important to make this transition to post-quantum crypto. Because, you know, the cryptography systems in use up until now are things that are not safe from quantum attacks if you have a utility-scale quantum computer. We do know that there are cryptosystems which, at least as far as we know, appear to be safe from quantum attacks. That's what's called post-quantum cryptography. You know, they rely on different types of hard math problems, which quantum computers probably aren't good at. And so, you know, changing over to a new crypto standard isn't something that happens at the flip of a switch.

HUIZINGA: No.

NAYAK: It's something that takes time.
You know, first, the early part of that was based around the National Institute of Standards and Technology aligning around one or a few standard systems that people would implement, which they certified would be quantum safe, and, you know, those processes have occurred. And so now is the time to switch over. Given that we know that we can do this and that it won't happen overnight, now's the time to make that switch.

HUIZINGA: And we've had several cryptographers on the show who've been working on this for years. It's not like they're just starting. They saw this coming even before you had some solidity in your work. But listen, I would love to talk to you for hours, but we're coming to a close here. And as we close, I want to refer to a conversation you had with distinguished university professor Sankar Das Sarma. He suggested that with the emergence of Majorana zero modes, you had reached the end of the beginning and that you were now sort of embarking on the beginning of the end in this work. Well, maybe that's a sort of romanticized vision of what it is. But could you give us a little bit of a hint: what are the next milestones on your road to a scalable, reliable quantum computer, and what's on your research roadmap to reach them?

NAYAK: Yeah, so interestingly, we actually just also posted on the arXiv a paper that shows some aspects of our roadmap, kind of the more scientific aspects of our roadmap. And that roadmap is, kind of, continuously going from the scientific discovery phase through the engineering phase, OK. Again, as I said, it's a matter of debate, and even taste, what exactly you want to call scientific discovery versus engineering, which will be hotly debated, I'm sure, but it is definitely a continuum that's going more from one towards the other.
And I would say, you know, at a high level, logical qubits, you know, error-corrected, reliable qubits, are the basis of quantum computation at scale, and developing, demonstrating, and building those logical qubits at scale is kind of the big thing that, for us and for the whole industry, is, sort of, the next level of quantum computing. Jason Zander wrote this blog where he talked about level one, level two, level three, where level one was this NISQ (noisy intermediate-scale quantum) era; level two is foundations of, you know, reliable and logical qubits; and level three is, you know, at-scale logical qubits. I think we're heading towards level two, and so in my mind, that's sort of, you know, the next North Star. I think there will be a lot of very interesting and important things that are more technical and maybe not as accessible to a big audience. But I'd say that's, kind of, you know, the thing to keep in mind as the big exciting thing happening in the field.

HUIZINGA: Yeah. Well, Chetan Nayak, what a ride this show has been. I'm going to be watching this space, and the timelines thereof, because they keep getting adjusted!

[MUSIC]

Thank you for taking time to share your important work with us today.

NAYAK: Thank you very much, my pleasure!

[MUSIC FADES]
  • Introducing Muse: Our first generative AI model designed for gameplay ideation
    www.microsoft.com
Today, the journal Nature (opens in new tab) is publishing our latest research, which introduces the first World and Human Action Model (WHAM). The WHAM, which we've named Muse, is a generative AI model of a video game that can generate game visuals, controller actions, or both.

The paper in Nature offers a detailed look at Muse, which was developed by the Microsoft Research Game Intelligence (opens in new tab) and Teachable AI Experiences (opens in new tab) (Tai X) teams in collaboration with Xbox Game Studios' Ninja Theory (opens in new tab). Simultaneously, to help other researchers explore these models and build on our work, we are open-sourcing the weights and sample data and making the executable available for the WHAM Demonstrator, a concept prototype that provides a visual interface for interacting with WHAM models and multiple ways of prompting the models. Developers can learn and experiment with the weights, sample data, and WHAM Demonstrator on Azure AI Foundry (opens in new tab).

In our research, we focus on exploring the capabilities that models like Muse need to effectively support human creatives. I'm incredibly proud of our teams and the milestone we have achieved, not only by showing the rich structure of the game world that a model like Muse can learn, as you see in the video demo below, but also, and even more importantly, by demonstrating how to develop research insights to support creative uses of generative AI models.

Generated gameplay examples

Example gameplay sequences generated by Muse (based on WHAM-1.6B) demonstrate that our model can generate complex gameplay sequences that are consistent over several minutes. All examples shown here were generated by prompting the model with 10 initial frames (1 second) of human gameplay and the controller actions of the whole play sequence. Muse is used in world model mode, meaning that it is used to predict how the game will evolve from the initial prompt sequence.
The more closely the generated gameplay sequence resembles the actual game, the more accurately Muse has captured the dynamics of that game.

What motivated this research?

As we release our research insights and model today, I keep thinking back to how this all started. There was a key moment back in December 2022 that I remember clearly. I had recently returned from maternity leave, and while I was away the machine learning world had changed in fundamental ways. ChatGPT had been publicly released, and those who had tried it were in awe of OpenAI's technical achievements and the model's capabilities. It was a powerful demonstration of what transformer-based generative models could do when trained on large amounts of (text) data. Coming back from leave at that moment, the key question on my mind was: What are the implications of this achievement for our team's work at the intersection of artificial intelligence and video games?

A new research opportunity enabled by data

In our team, we had access to a very different source of data. For years, we had collaborated with Xbox Game Studios' Ninja Theory (based in Cambridge, UK, just like our research team) to collect gameplay data from Bleeding Edge, their 2020 Xbox game. Bleeding Edge is a 4-versus-4 game where all games are played online, and matches are recorded if the player agrees to the End User License Agreement (EULA). We worked closely with our colleagues at Ninja Theory and with Microsoft compliance teams to ensure that the data was collected ethically and used responsibly for research purposes.

"It's been amazing to see the variety of ways Microsoft Research has used the Bleeding Edge environment and data to explore novel techniques in a rapidly moving AI industry," said Gavin Costello, technical director at Ninja Theory.
"From the hackathon that started it all, where we first integrated AI into Bleeding Edge, to building AI agents that could behave more like human players, to the World and Human Action Model being able to dream up entirely new sequences of Bleeding Edge gameplay under human guidance, it's been eye-opening to see the potential this type of technology has."

Muse Training Data

Current Muse instances were trained on human gameplay data (visuals and controller actions) from the Xbox game Bleeding Edge, shown here at the 300×180 px resolution at which we train current models. Muse (using WHAM-1.6B) has been trained on more than 1 billion images and controller actions, corresponding to over 7 years of continuous human gameplay.

The Game Intelligence and Teachable AI Experiences teams playing the Bleeding Edge game together.

Until that point in late 2022, we had used Bleeding Edge as a platform for human-like navigation experiments, but we had not yet made meaningful use of the large amount of human player data we now had available. With the powerful demonstration of text-based models, the next question was clear: What could we achieve if we trained a transformer-based model on large amounts of human gameplay data?

Scaling up model training

As the team got to work, some of the key challenges included scaling up the model training. We initially used a V100 cluster, where we were able to prove out how to scale up to training on up to 100 GPUs; that eventually paved the way to training at scale on H100s. Key design decisions we made early focused on how to best leverage insights from the large language model (LLM) community, and included choices such as how to effectively represent controller actions and especially images.

The first sign that the hard work of scaling up training was paying off came in the form of a demo that thoroughly impressed me. Tim Pearce, at that time a researcher in Game Intelligence, had put together examples of what happened early versus later in training.
You can see the demo here; it's like watching the model learn. This led to our follow-up work showing how scaling laws emerge in these kinds of models.

Muse consistency over the course of training

Comparing ground truth human gameplay (left) to visuals generated using Muse (using WHAM-206M) when prompted with 1 second of human gameplay (visuals and controller actions) and 9 seconds of controller actions from the ground truth. In this setting, if Muse can generate visuals that closely match the ground truth, then it has captured the game dynamics. We see that the quality of generated visuals improves visibly over the course of training. In early training (10k training updates) we see signs of life, but quality deteriorates quickly. After 100k training updates, the model is consistent over time but does not yet capture relatively less frequent aspects of the game dynamics, such as the flying mechanic. Consistency with the ground truth continues to improve with additional training, e.g., the flying mechanic is captured after 1M training updates.

Multidisciplinary collaboration: Involving users from the beginning

We had started to investigate how to evaluate these types of models early on. For example, we wanted to understand the representations learned using linear probing, which was driven by Research Intern Gunshi Gupta and Senior Research Scientist Sergio Valcarcel Macua; to explore online evaluation, driven by Senior Research Scientist Raluca Georgescu; and to generate both visuals and actions, initially termed "full dreaming" and driven by Research Intern Tarun Gupta. But working through how to systematically evaluate Muse required a much broader set of insights.
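Linear probing, mentioned above, can be sketched in a few lines: freeze the model, extract its features, and fit only a linear readout. If the readout predicts a property well, the representation encodes that property. The frozen network and data below are synthetic stand-ins, not Muse internals.

```python
# Linear probing sketch: the "backbone" weights stay frozen; we fit only a
# linear map from its features to a target property and check how well it
# predicts. The backbone and data here are synthetic stand-ins, not Muse.
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((8, 16))  # pretend pretrained weights, never updated

def frozen_features(x: np.ndarray) -> np.ndarray:
    # Stand-in for a frozen network's hidden activations.
    return np.tanh(x @ W_frozen)

# Synthetic inputs and a target that is linearly recoverable from the features.
X = rng.standard_normal((200, 8))
feats = frozen_features(X)
y = feats @ rng.standard_normal(16)

# Fit only the probe (ordinary least squares); the backbone is untouched.
probe, *_ = np.linalg.lstsq(feats, y, rcond=None)
pred = feats @ probe
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
# High r2 means the property is linearly decodable from the representation.
```

The key design choice is that only `probe` is trained; a high score therefore says something about the representation itself, not about the capacity of the readout.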
More importantly, we needed to understand how people might use these models in order to know how to evaluate them. This was where the opportunity for multidisciplinary research became crucial. We had discussed aspects of this work with Senior Principal Research Manager Cecily Morrison and her Teachable AI Experiences team for several months. And we had already partnered on an engagement with game creatives (driven by Cecily, Design Researcher Linda Wen, and Principal Research Software Development Engineer Martin Grayson) to investigate how game creators would like to use generative AI capabilities in their creative practice.

"It was a great opportunity to join forces at this early stage to shape model capabilities to suit the needs of creatives right from the start, rather than try to retrofit an already developed technology," Cecily said.

Linda offered some valuable insights about how we approached the work: "We've seen how technology-driven AI innovation has disrupted the creative industry, often catching creators off guard and leaving many feeling excluded," she said. "This is why we invited game creators to help us shape this technology from the start. Recognizing that most AI innovations are developed in the Global North, we also made it a priority to recruit game creators from underrepresented backgrounds and geographies. Our goal was to create a technology that benefits everyone, not just those already in positions of privilege."

Unlocking new creative use cases with the WHAM Demonstrator

Now, with the model's emerging capabilities and user insights in mind, it was time to put all the pieces together. The teams joined forces during a Microsoft internal hackathon to explore new interaction paradigms and creative uses that Muse could unlock.
As a result, we developed a prototype that we call the WHAM Demonstrator, which allows users to directly interface with the model.

"The Global Hackathon was the perfect opportunity for everyone to come together and build our first working prototype," Martin said. "We wanted to develop an interface for the WHAM model that would allow us to explore its creative potential and start to test ideas and uses we had learned from our interviews with game developers."

WHAM Demonstrator

For interacting with World and Human Action Models like Muse, the WHAM Demonstrator provides a visual interface for working with a WHAM instance. In this example, the user is loading a visual as an initial prompt to the model, here a single promotional image for the game Bleeding Edge. They use Muse to generate multiple potential continuations from this starting point. The user explores the generated sequences and can tweak them, for example using a game controller to direct the character. These features demonstrate how Muse's capabilities can enable iteration as part of the creative process.

Identifying key capabilities and how to evaluate them

The hands-on experience of exploring Muse capabilities with the WHAM Demonstrator, and drawing on insights we gained from the user study, allowed us to systematically identify capabilities that game creatives would require to use generative models like Muse. This in turn allowed us to establish evaluation protocols for three key capabilities: consistency, diversity, and persistency. Consistency refers to a model's ability to generate gameplay sequences that respect the dynamics of the game. For example, the character moves consistently with controller actions, does not walk through walls, and generally reflects the physics of the underlying game. Diversity refers to a model's ability to generate a range of gameplay variants given the same initial prompt, covering a wide range of ways in which gameplay could evolve.
Finally, persistency refers to a model's ability to incorporate (or persist) user modifications into generated gameplay sequences, such as a character that is copy-pasted into a game visual. We give an overview of these capabilities below.

Muse evaluation of consistency, diversity, and persistency

Consistency

We evaluate consistency by prompting the model with ground truth gameplay sequences and controller actions, and letting the model generate game visuals. The videos shown here are generated using Muse (based on WHAM-1.6B) and demonstrate the model's ability to generate consistent gameplay sequences of up to two minutes. In our paper, we also compare the generated visuals to the ground truth visuals using FVD (Fréchet Video Distance), an established metric in the video generation community.

Diversity

Muse (based on WHAM-1.6B) generated examples of behavioral and visual diversity, conditioned on the same initial 10 frames (1 second) of real gameplay. The three examples at the top show behavioral diversity (diverse camera movement, loitering near the spawn location, and navigating various paths to the middle jump pad). The three examples below show visual diversity (different hoverboards for the character). In the paper, we also quantitatively assess diversity using the Wasserstein distance, a measure of distance between two distributions, to compare the model-generated sequences to the diversity reflected in human gameplay recordings.
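As a rough illustration of the diversity metric, for equal-size empirical samples the 1-D Wasserstein-1 distance reduces to the mean absolute difference of the sorted samples. The gameplay statistic and data below are synthetic, purely to show the mechanics, not the paper's actual features.

```python
# Illustrative diversity check: compare the distribution of some scalar
# gameplay statistic (e.g., distance traveled per rollout) between human
# and model-generated sequences. For equal-size samples, the 1-D
# Wasserstein-1 distance equals the mean absolute difference between
# sorted samples. All data here is synthetic.
import numpy as np

def wasserstein_1d(a: np.ndarray, b: np.ndarray) -> float:
    assert len(a) == len(b)
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

rng = np.random.default_rng(0)
human = rng.normal(10.0, 2.0, size=1000)       # human gameplay statistic
diverse = rng.normal(10.2, 2.0, size=1000)     # model with human-like spread
collapsed = rng.normal(10.0, 0.2, size=1000)   # mode-collapsed model

d_diverse = wasserstein_1d(human, diverse)
d_collapsed = wasserstein_1d(human, collapsed)
# A mode-collapsed model's distribution sits farther from the human one,
# even though its mean matches, which is exactly what the metric penalizes.
```

A low distance to the human distribution indicates the model covers a similarly wide range of gameplay, rather than repeating a few modes.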
With our evaluation framework in place, and access to an H100 compute allocation, the team was able to further improve Muse instances, including higher-resolution image encoders (our current models generate visuals at a resolution of 300×180 pixels, up from the 128×128 resolution of our earliest models) and larger models, and expand to all seven Bleeding Edge maps. To show some of the capabilities of the model we are publishing today, we have included videos of 2-minute-long generated gameplay sequences above, which give an impression of the consistency and diversity of gameplay sequences that the model can generate.

According to Senior Researcher Tabish Rashid: "Being handed an allocation of H100s was initially quite daunting, especially in the early stages of figuring out how to make best use of it to scale to larger models with the new image encoders. After months of experimentation, it was immensely rewarding to finally see outputs from the model on a different map (not to knock the lovely greenery of Skygarden) and not have to squint so much at smaller images. I'm sure at this point many of us have watched so many videos from Muse that we've forgotten what the real game looks like."

One of my favorite capabilities of the model is how it can be prompted with modifications of gameplay sequences and persist newly introduced elements. For example, in the demo below, we've added a character onto the original visual from the game. Prompting the model with the modified visual, we can see how the model persists the added character and generates plausible variants of how the gameplay sequence could have evolved from this modified starting point.

Persistency

Demonstrations of how Muse (based on WHAM-1.6B) can persist modifications. A visual is taken from the original gameplay data, and an image of an additional character is edited into the image.
The generated gameplay sequence shows how the character is adapted into the generated gameplay sequence.

Conclusion

Today, our team is excited to be publishing our work in Nature and simultaneously releasing Muse open weights, the WHAM Demonstrator, and sample data to the community. I look forward to seeing the many ways in which the community will explore these models and build on our research. I cannot wait to see all the ways that these models and subsequent research will help shape and increase our understanding of how generative AI models of human gameplay may support gameplay ideation and pave the way for future, novel, AI-based game experiences, including the use cases that our colleagues at Xbox have already started to explore.
  • Microsoft Research and Physics Wallah team up to enhance AI-based tutoring
    www.microsoft.com
    In India, limited resources, geographical constraints, and economic factors present barriers to quality higher education for some students. A shortage of teachers, particularly in remote or low-income areas, makes it harder for students to receive the guidance they need to prepare for highly competitive professional and academic programs.

Microsoft Research is developing new algorithms and techniques that are enabling Physics Wallah, a growing educational company, to make its AI-based tutoring services more accurate and reliable, to better support students on their education journey.

As in other countries, many Indian students purchase coaching and tutoring services to prepare for entrance exams at top institutions. This includes offline coaching, where hundreds of students meet in a classroom staffed by teachers covering a structured curriculum. Online coaching enables students to learn remotely in a virtual classroom. Hybrid coaching delivers virtual lessons in a physical classroom.

Offline courses can cost as much as 100,000 Indian rupees a year, equivalent to hundreds of U.S. dollars. This puts them out of reach for many lower-income students living in smaller and mid-sized Indian cities, as well as rural villages. Online courses are much more affordable. They allow students to work at their own pace by providing high-quality web-based content supported by teachers who work remotely.

Meeting this need is the mission of Physics Wallah. The company uses AI to offer on-demand tutoring at scale, curating volumes of standard science- and math-related content to provide the best answers. Some 2 million students use the Physics Wallah platform every day, at a fraction of the cost of offline tutoring.
For example, its prep courses for the Joint Entrance Examination (JEE), which is required for admission to engineering and technology programs, and the National Eligibility cum Entrance Test (NEET), a required entrance exam for medical and dental school candidates, cost between 4,200 and 4,500 rupees per year. That's roughly 50 U.S. dollars.

"The mantra here really is how do we provide quality education in an affordable manner and accessible to every student, regardless of who they are or where they come from."
Vineet Govil, Chief Technology and Product Officer, Physics Wallah

Microsoft Research India's collaboration with Physics Wallah is part of a 20-year legacy of supporting emerging Indian companies, underscored by the January 2025 announcement that Microsoft will invest $3 billion in cloud and AI infrastructure to accelerate the adoption of AI, skilling, and innovation. Physics Wallah has developed an AI-driven educational suite, Alakh AI, leveraging OpenAI's GPT-4o model through Microsoft Azure OpenAI Service.
Alakh AI's flagship offerings include AI Guru and the Smart Doubt Engine, both designed to transform the learning experience in and beyond the classroom. AI Guru acts as a personal academic tutor, delivering adaptive guidance based on a student's progress, real-time question-solving, and customized content that evolves with their learning journey. Smart Doubt Engine is an AI tool through which students can ask questions (also known as "doubts" in Indian English) during live classes and receive instant responses.

Additionally, the Alakh AI suite includes:

- AI Grader, for subjective answer evaluation without human intervention
- Sahayak, for crafting hyper-personalized learning paths tailored to individual students' needs

This innovative ecosystem elevates learning efficiency and accessibility for students.

AI Guru in action: A student asks, "Explain Newton's First Law," and the AI tutor provides a detailed explanation along with two videos for further learning.

Smart Doubt Engine in action: A student asks a clarifying question during a live class, and the AI provides a detailed explanation in real time.

How does AI Guru work?

Let's say a student had a question about Newton's laws of motion, a core concept in physics. She would type her query into the AI Guru chat window (she could also just talk to it or upload an image from a textbook) and receive a text answer plus images derived from standard textbooks and curated content, typically in just a few seconds. AI Guru also provides a short video where a teacher offers additional context.

Getting the technology right

The Alakh AI suite is powered by OpenAI's foundational models GPT-4 and GPT-4o, integrated with a retrieval-augmented generation (RAG) architecture. It leverages Physics Wallah's rich repository of high-quality curated content, developed and refined over several years, along with continuous updates from subject matter experts to ensure new materials, textbooks, tutorials, and question banks are seamlessly incorporated.
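The RAG pattern described above can be sketched in a few lines: retrieve the most relevant curated passages for a query and prepend them to the model prompt so the answer is grounded in vetted content. The corpus, scoring function, and prompt format below are toy stand-ins, not Physics Wallah's actual pipeline.

```python
# Minimal retrieval-augmented generation (RAG) sketch: fetch the passages
# most relevant to a query from a curated corpus and build a grounded
# prompt for the LLM. The corpus, lexical scoring, and prompt format are
# toy stand-ins for embedding search over a production content repository.
from collections import Counter

CORPUS = {
    "newton_1": "Newton's first law: a body stays at rest or in uniform motion unless acted on by a net force.",
    "newton_2": "Newton's second law: force equals mass times acceleration (F = ma).",
    "ohm": "Ohm's law: voltage equals current times resistance (V = IR).",
}

def score(query: str, passage: str) -> int:
    # Toy lexical-overlap score standing in for embedding similarity.
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 2) -> list:
    ranked = sorted(CORPUS.values(), key=lambda text: score(query, text), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("Explain Newton's first law of motion")
```

The design point is that the LLM answers from the retrieved curated passages rather than from its parametric memory alone, which is what keeps responses aligned with the vetted textbook content.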
Despite considerable progress, the existing AI sometimes falters when navigating complex academic problems.

"The accuracy level of today's large language models (LLMs) is not up to the mark where we can provide reliable and satisfactory answers to the students all the time, specifically if it's a hard mathematical problem involving complex equations," Govil said.

That's one important focus of the collaboration. Researchers from Microsoft Research are developing new algorithms and techniques to enhance the accuracy and reasoning capabilities of AI models. They are now collaborating with Physics Wallah to apply these advancements to the Alakh AI suite, improving its ability to solve complex problems and provide more reliable, step-by-step guidance to students.

A key challenge is the nature of student queries, which are often ambiguous and involve multimodal inputs (text, images, videos, or audio), requiring unified capabilities to address the problem. Many STEM problems require breaking down complex queries into logical sub-problems and applying high-order, step-by-step reasoning for consistency. Additionally, integrating domain-specific knowledge in advanced math, physics, chemistry, and biology requires contextualization and seamless retrieval of specialized, grade-appropriate information.

Microsoft Research is working with Physics Wallah to move beyond traditional next-token prediction and develop AI systems that approach reliable, systematic, step-by-step problem-solving. That includes ongoing work to enhance the model's reasoning capabilities and deliver more accurate answers on complex JEE math problems. Instead of just providing the final answer, the underlying models now break problems into step-by-step solutions. That helps students learn how to solve the actual problems.
The AI can also review student answers, detect mistakes, and give detailed feedback, acting as a personal tutor to guide students, improve their understanding, and enhance their learning experience.

Solving complex problems requires enhancing the reasoning capabilities of both large and small language models by training them to not just generate answers, but to systematically think through and reason about complex problems. This requires high-quality reasoning traces: detailed, step-by-step breakdowns of logical problem-solving processes.

To enable this, researchers collaborated with Physics Wallah to curate a dataset of 150,000 high-quality math reasoning traces. These traces serve as the foundation for training specialized small language models (SLMs) using supervised fine-tuning (SFT). Model performance is further refined through training on carefully curated on-policy preference data, ensuring alignment with high-quality reasoning standards. The team's current Phi-based models have already outperformed leading LLMs and other baselines on complex math problems.

"Building AI systems capable of human-like thinking and reasoning represents a significant challenge."
Akshay Nambi, Principal Researcher at Microsoft Research India

The next step is to develop a self-evolving learning pipeline using online reinforcement learning techniques, allowing the model to continuously generate high-quality synthetic data that further enhances its capabilities.
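To make the reasoning-trace idea concrete, here is one plausible way to serialize a trace into a prompt/completion pair for SFT, so the model is trained to emit the intermediate steps rather than only the final answer. The field names and serialization format are illustrative assumptions, not the team's actual schema.

```python
# Sketch of preparing reasoning traces for supervised fine-tuning (SFT):
# each example pairs a problem with its step-by-step solution, and the
# training loss would be applied to the completion text. Field names and
# format are hypothetical, not the actual curation pipeline.

TRACES = [
    {
        "problem": "Solve 2x + 6 = 10.",
        "steps": [
            "Subtract 6 from both sides: 2x = 4.",
            "Divide both sides by 2: x = 2.",
        ],
        "answer": "x = 2",
    },
]

def to_sft_example(trace: dict) -> dict:
    # Serialize the trace so the model learns to emit the reasoning,
    # not just the final answer.
    target = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(trace["steps"]))
    target += f"\nAnswer: {trace['answer']}"
    return {
        "prompt": f"Problem: {trace['problem']}\nShow your work.\n",
        "completion": target,
    }

examples = [to_sft_example(t) for t in TRACES]
```

Training on completions shaped this way is what lets the fine-tuned model present a worked solution a student can follow, which is the behavior described in the text.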
Additionally, researchers are building a reward model and integrating it with Monte Carlo Tree Search (MCTS) to optimize reasoning and improve inference-time decision-making.

The goal is to develop tools that complement education. To do this, we are enhancing the model's capabilities to process, break down, and solve problems step-by-step. We do this by incorporating high-quality data into training to teach the model how to approach such tasks, alongside algorithmic innovations that enable the model to think and reason more effectively.

Opening new doors for students

Getting an education at a top university can be life changing for anyone. For Chandramouleswar Parida, it could change the lives of everyone in his home village in Baniatangi, Khordha, Odisha State, India. Chandra decided to become a doctor after watching his grandfather die from a heart attack. The nearest doctor who could have treated him was at a regional hospital 65 kilometers away.

"He could have been saved if certain procedures had been followed," Chandra said. He wants to study medicine, perhaps receiving advanced training overseas, and then return home. "I want to be a doctor here in our village and serve our people, because there is a lack of treatment. Being a doctor is a very noble kind of job in this society."

Chandra is the only student in Baniatangi Village, Khordha, Odisha, currently preparing for the NEET. Without Physics Wallah, students like Chandra would likely have no access to the support and resources that can't be found locally.

Another student, Anushka Sunil Dhanwade, is optimistic that Physics Wallah will help her dramatically improve her initial score on the NEET exam. While in 11th class, or grade, she joined an online NEET prep class with 800 students. But she struggled to follow the coursework, as the teachers tailored the content to the strongest students.
After posting a low score on the NEET exam, her hopes of becoming a doctor were fading. But after a serious stomach illness reminded her of the value of having a doctor in her family, she tried again, this time with Physics Wallah and AI Guru. After finishing 12th class, she began preparing for NEET and plans to take the exams again in May, confident that she will increase her score.

"AI Guru has made my learning so smooth and easy because it provides me answers related to my study and study-related doubt just within a click."
Anushka Sunil Dhanwade, Student

Next steps in the collaboration

The collaboration between Microsoft Research and Physics Wallah aims to apply the advancements in solving math problems across additional subjects, ultimately creating a unified education LLM with enhanced reasoning capabilities and improved accuracy to support student learning.

"We're working on an education-specific LLM that will be fine-tuned using the extensive data we've gathered and enriched by Microsoft's expertise in LLM training and algorithms. Our goal is to create a unified model that significantly improves accuracy and raises student satisfaction rates to 95% and beyond," Govil explained.

The teams are also integrating a new tool from Microsoft Research called PromptWizard, an automated framework for optimizing the instructions given to a model, into Physics Wallah's offerings. New prompts can now be generated in minutes, eliminating months of manual work, while providing more accurate and aligned answers for students.

For Nambi and the Microsoft Research India team, the collaboration is the latest example of their deep commitment to cultivating the AI ecosystem in India and translating new technology from the lab into useful business applications.

By leveraging advanced reasoning techniques and domain expertise, we are transforming how AI addresses challenges across multiple subjects.
This represents a key step in building AI systems that act as holistic personal tutors, enhancing student understanding and creating a more engaging learning experience, Nambi said.
  • ExACT: Improving AI agents' decision-making via test-time compute scaling
    www.microsoft.com
    Autonomous AI agents are transforming the way we approach multi-step decision-making processes, streamlining tasks like web browsing, video editing, and file management. By applying advanced machine learning, they automate workflows, optimize performance, and reduce the need for human input. However, these systems struggle in complex, dynamic environments. A key challenge lies in balancing exploitation, using known strategies for immediate gains, with exploration, which involves seeking new strategies that could yield long-term benefits. Additionally, they often have difficulty adapting to unpredictable changes in conditions and objectives, as well as generalizing knowledge across contexts, limiting their ability to transfer learned strategies between domains.

In response, we developed ExACT, an approach for teaching AI agents to explore more effectively, enabling them to intelligently navigate their environments, gather valuable information, evaluate options, and identify optimal decision-making and planning strategies. ExACT combines two key techniques: Reflective-MCTS (R-MCTS) and Exploratory Learning.

R-MCTS builds on the traditional Monte Carlo Tree Search (MCTS) algorithm, introducing features like contrastive reflection and a multi-agent debate function. Through contrastive reflection, the agent refines its decision-making by comparing expected outcomes with actual results, allowing it to learn from both its successes and mistakes. The multi-agent debate function provides various evaluations of a given state, where multiple agents offer contrasting perspectives to help provide a balanced and reliable assessment.

Exploratory Learning trains agents to navigate environments effectively.
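The classic MCTS loop that R-MCTS builds on (selection, expansion, simulation, backpropagation) can be shown on a toy task. The number-line environment below is purely illustrative, and the reflection and debate components of R-MCTS are deliberately not reproduced here.

```python
# Bare-bones Monte Carlo Tree Search (UCB selection, expansion, random
# rollout, backpropagation) on a toy number-line task: reach GOAL from 0
# using +1/-1 moves. R-MCTS layers contrastive reflection and multi-agent
# value estimates on top of this classic skeleton; none of that is shown.
import math
import random

random.seed(0)
GOAL = 5

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def ucb(child, parent_visits, c=1.4):
    # Upper-confidence bound: average reward plus an exploration bonus.
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def simulate(state, depth=10):
    # Random rollout: reward 1.0 if the goal is ever reached.
    for _ in range(depth):
        if state == GOAL:
            return 1.0
        state += random.choice((1, -1))
    return 1.0 if state == GOAL else 0.0

def mcts(root_state, iters=2000):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # Selection: descend through fully expanded nodes via UCB.
        while len(node.children) == 2:
            node = max(node.children.values(), key=lambda ch: ucb(ch, node.visits))
        # Expansion: add one untried action.
        for action in (1, -1):
            if action not in node.children:
                node.children[action] = Node(node.state + action, node)
                node = node.children[action]
                break
        reward = simulate(node.state)  # Simulation
        while node is not None:        # Backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the root action with the most visits.
    return max(root.children, key=lambda a: root.children[a].visits)

best_action = mcts(0)
```

Contrastive reflection would plug in where rollout outcomes are compared with the values the search expected, and multi-agent debate would replace the single random rollout with several independent state evaluations.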
Together, these techniques show strong computational scalability during both training and testing, as demonstrated on VisualWebArena, a benchmark for evaluating multimodal autonomous language agents (Figure 1).

Figure 1. Evaluation demonstrates the compute scaling properties of GPT-4o during both training and testing. The assessment includes two scenarios: (1) applying the GPT-4o-based R-MCTS agent to all 234 tasks from the Classifieds category in VisualWebArena (left), and (2) testing fine-tuned GPT-4o on 169 previously unseen tasks from Classifieds without using search algorithms (right).

R-MCTS extends the classic MCTS by enabling real-time improvements in decision-making. Shown in Figure 2, an iterative feedback loop allows R-MCTS to learn from past experiences, avoid prior mistakes, and focus on more effective actions in similar contexts.

Figure 2. Overview of the R-MCTS process in ExACT.

Evaluating R-MCTS

R-MCTS demonstrates state-of-the-art performance across all VisualWebArena environments, surpassing the previous best-performing method, Search Agent, with improvements ranging from 6% to 30% (Table 1). Additionally, as of January 2025, it holds the second position on the OSWorld leaderboard and demonstrates state-of-the-art performance in the blind test setting, where there is no prior access to the test environment, reflecting its advanced capabilities (Table 2).

Rank  Model                                    Score
1     GPT-4o + ExACT                           33.70
2     GPT-4o + Search                          26.40
3     GPT-4o + WebDreamer                      23.60
4     GPT-4o + ICAL                            23.40
5     GPT-4o                                   19.78
6     Llama-3-70B + Search                     16.70

Table 1. The VisualWebArena leaderboard highlights R-MCTS as achieving state-of-the-art performance as of December 2024.

Rank  Model                                    Blind Test Score
1     learn-by-interact w/ Claude-3.5-sonnet   22.50
2     ExACT w/ GPT-4o                          16.60
3     GPT-4                                    12.24
4     GPT-4o                                   11.36
5     GPT-4 Vision (0409)                      10.82
6     learn-by-interact w/ Gemini-1.5-pro      10.30

Table 2.
The OSWorld leaderboard for the category of A11y tree inputs shows that ExACT with GPT-4o ranks second and demonstrates state-of-the-art performance in the blind test setting, as of December 2024.

How Exploratory Learning works

Exploratory Learning enables agents to dynamically search and adjust their computational resources during testing without depending on MCTS. In contrast to Imitation Learning, which centers on training models using the optimal actions identified through search, Exploratory Learning focuses on cultivating the agent's ability to navigate its environment by teaching it to evaluate states, explore different pathways, and efficiently backtrack from unpromising paths to identify more favorable alternatives.

Figure 3. In contrast to Imitation Learning, Exploratory Learning uses the entire search trajectory for training.

Evaluating Exploratory Learning

We conducted experiments using GPT-4o fine-tuned with Exploratory Learning in the VisualWebArena environment. Results demonstrate the following key benefits:

- Improved performance: GPT-4o achieves performance improvement comparable with scaling test-time compute with MCTS, even without search.
- Test-time compute scaling: GPT-4o performs better when given more actions per task, leading to improved decision-making and task completion, which increased from 5% to 12.4%.
- Improved generalization on unseen tasks: Exploratory Learning helps fine-tuned GPT-4o handle unseen tasks more effectively than agents trained with Imitation Learning or no additional training.

The following video provides a detailed demonstration of how R-MCTS and Exploratory Learning function.

Continued exploration

Advancing autonomous AI agents is key to enabling them to handle complex, multi-step tasks with greater precision and adaptability.
ExACT represents a significant step toward creating agents that can perform complex decision-making before taking action, leading to improved performance, but challenges remain. How can AI agents improve decision-making in real-world scenarios, where they may be constrained by time or resources? How can they learn effectively and efficiently from environmental feedback? We are currently investigating these questions, and we invite you to explore them with us by building on the ExACT framework. Access the ExACT code at our GitHub repository (opens in new tab).
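As a rough illustration of the search loop that R-MCTS builds on, the following is a minimal classic Monte Carlo tree search with UCT selection over a toy environment. The environment (`SumEnv`, a reach-the-target counting game), the exploration constant, and all names here are assumptions for illustration only, not the ExACT implementation, which layers its iterative feedback loop on top of a search like this.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.value = [], 0, 0.0

def uct(child, parent_visits, c=1.4):
    """Upper-confidence score balancing exploitation and exploration."""
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def mcts(env, root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # Selection: descend via UCT while the node is fully expanded.
        while node.children and len(node.children) == len(env.actions(node.state)):
            node = max(node.children, key=lambda ch: uct(ch, node.visits))
        # Expansion: add one untried action as a new child.
        tried = {ch.action for ch in node.children}
        untried = [a for a in env.actions(node.state) if a not in tried]
        if untried:
            a = random.choice(untried)
            node = Node(env.step(node.state, a), parent=node, action=a)
            node.parent.children.append(node)
        # Simulation: random rollout to a terminal state.
        state = node.state
        while env.actions(state):
            state = env.step(state, random.choice(env.actions(state)))
        reward = env.reward(state)
        # Backpropagation: update statistics along the path to the root.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).action

class SumEnv:
    """Toy task: reach a total of exactly 12 in at most 4 steps of +1/+2/+3.
    Only a first move of +3 keeps the target reachable."""
    def actions(self, state):
        total, steps = state
        return [1, 2, 3] if steps > 0 and total < 12 else []
    def step(self, state, a):
        return (state[0] + a, state[1] - 1)
    def reward(self, state):
        return 1.0 if state[0] == 12 else 0.0

random.seed(0)  # deterministic for the demo
best = mcts(SumEnv(), (0, 4))
```

In this toy task, only the first action 3 ever leads to a reward, so the visit counts concentrate there; the same select-expand-simulate-backpropagate skeleton underlies the tree search that R-MCTS augments with learned feedback.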
  • Ideas: Building AI for population-scale systems with Akshay Nambi
    www.microsoft.com
Transcript

[TEASER] [MUSIC PLAYS UNDER DIALOGUE]

AKSHAY NAMBI: For me, research is not just about pushing the boundaries of knowledge. It's about ensuring that these advancements translate to meaningful impact on the ground. So, yes, the big goals that guide most of my work are twofold. One, how do we build technology that's scaled to benefit large populations? And two, at the same time, I'm motivated by the challenge of tackling complex problems. That provides opportunity to explore, learn, and also create something new, and that's what keeps me excited.

[TEASER ENDS]

CHRIS STETKIEWICZ: You're listening to Ideas, a Microsoft Research Podcast that dives deep into the world of technology research and the profound questions behind the code. In this series, we'll explore the technologies that are shaping our future and the big ideas that propel them forward.

[MUSIC FADES]

I'm your guest host, Chris Stetkiewicz. Today, I'm talking to Akshay Nambi. Akshay is a principal researcher at Microsoft Research. His work lies at the intersection of systems, AI, and machine learning, with a focus on designing, deploying, and scaling AI systems to solve compelling real-world problems. Akshay's research extends across education, agriculture, transportation, and energy. He is currently working on enhancing the quality and reliability of AI systems by addressing critical challenges such as reasoning, grounding, and managing complex queries.

Akshay, welcome to the podcast.

AKSHAY NAMBI: Thanks for having me.

STETKIEWICZ: I'd like to begin by asking you to tell us your origin story. How did you get started on your path? Was there a big idea or experience that captured your imagination or motivated you to do what you're doing today?

NAMBI: If I look back, my journey into research wasn't a straight line. It was more about discovering my passion through some unexpected opportunities and also finding purpose along the way.
So before I started with my undergrad studies, I was very interested in electronics and systems. My passion for electronics, kind of, started when I was in school. I was more like an average student, not a nerd or too curious, but I was always tinkering around, doing things, building stuff, and playing with gadgets, and that, kind of, made me very keen on electronics and putting things together, and that was my passion. But sometimes things don't go as planned. So I didn't get into the college which I had hoped to join for electronics, so I ended up pursuing computer science, which wasn't too bad either. So during my final year of bachelor's, I had to do a final semester project, which turned out to be a very pivotal moment. And that's when I got to know this institute called the Indian Institute of Science (IISc), which is a top research institute in India and also globally. And I had a chance to work on a project there. And it was my first real exposure to open-ended research, right, so I remember we were trying to build a solution that helped to efficiently construct an ontology for a specific domain, which simply means that we were building systems to help users uncover relationships in the data and allow them to query it more efficiently, right. And it was super exciting for me to design and build something new. And that experience made me realize that I wanted to pursue research further. And right after that project, I decided to explore research opportunities, which led me to join the Indian Institute of Science again as a research assistant.

STETKIEWICZ: So what made you want to take the skills you were developing and apply them to a research career?

NAMBI: So interestingly, when I joined IISc, the professor I worked with specialized in electronics, so things come back, so something I had always been passionate about. And I was the only computer science graduate in the lab at that time, with the others being electronics engineers, and I didn't even know how to solder.
But the lab environment was super encouraging, collaborative, so I, kind of, caught up very quickly. In that lab, basically, I worked on several projects in the emerging fields of embedded devices and energy-harvesting systems. Specifically, we were designing systems that could harvest energy from sources like sun, hydro, and even RF (radio frequency) signals. And my role was kind of twofold. One, I designed circuits and systems to make energy harvesting more efficient so that you can store this energy. And then I also wrote programs, software, to ensure that the harvested energy can be used efficiently. For instance, as we harvest some of this energy, you want to have your programs run very quickly so that you are able to sense the data and send it to the server in an efficient way. And one of the most exciting projects I worked on during that time was on data-driven agriculture. So this was back in 2008, 2009, right, where we developed an embedded system device with sensors to monitor the agricultural fields, collecting data like soil moisture and soil temperature. And that was sent to the agronomists, who were able to analyze this data and provide feedback to farmers. In many remote areas, access to power is still a huge challenge. So we used many of the technologies we were developing in the lab, specifically energy-harvesting techniques, to power these sensors and devices in the rural farms, and that's when I really got to see firsthand how technology could help people's lives, particularly in rural settings. And that's what, kind of, stood out in my experience at IISc, right, was the end-to-end nature of the work. And it was not just writing code or designing circuits. It was about identifying the real-world problems, solving them efficiently, and deploying solutions in the field.
And this cemented my passion for creating technology that solves real-world problems, and that's what keeps me driving even today.

STETKIEWICZ: And as you're thinking about those problems that you want to try and solve, where did you look for inspiration? It sounds like some of these are happening right there in your home.

NAMBI: That's right. Growing up and living in India, I've been surrounded by these, kind of, many challenges. And these are not distant problems. These are right in front of us. And some of them are quite literally outside the door. So being here in India provides a unique opportunity to tackle some of the pressing real-world challenges in agriculture, education, or in road safety, where even small advancements can create significant impact.

STETKIEWICZ: So how would you describe your research philosophy? Do you have some big goals that guide you?

NAMBI: Right, as I mentioned, right, my research philosophy is mainly rooted in solving real-world problems through end-to-end innovation. For me, research is not just about pushing the boundaries of knowledge. It's about ensuring that these advancements translate to meaningful impact on the ground, right. So, yes, the big goals that guide most of my work are twofold. One, how do we build technology that's scaled to benefit large populations? And two, at the same time, I'm motivated by the challenge of tackling complex problems. That provides opportunity to explore, learn, and also create something new. And that's what keeps me excited.

STETKIEWICZ: So let's talk a little bit about your journey at Microsoft Research. I know you began as an intern, and some of the initial work you did was focused on computer vision, road safety, energy efficiency. Tell us about some of those projects.

NAMBI: As I was nearing the completion of my PhD, I was eager to look for opportunities in industrial labs, and Microsoft Research obviously stood out as an exciting opportunity.
And additionally, the fact that Microsoft Research India was in my hometown, Bangalore, made it even more appealing. So when I joined as an intern, I worked together with Venkat Padmanabhan, who now leads the lab, and we started this project called HAMS, which stands for Harnessing Automobiles for Safety. As you know, road safety is a major public health issue globally, responsible for almost 1.35 million fatalities annually, with the situation being even more severe in countries like India. For instance, there are estimates that there's a life lost on the road every four minutes in India. When analyzing the factors which affect road safety, we saw mainly three elements. One, the vehicle. Second, the infrastructure. And then the driver. Among these, the driver plays the most critical role in many incidents, whether it's over-speeding, driving without seat belts, drowsiness, fatigue, any of these, right. And this realization motivated us to focus on driver monitoring, which led to the development of HAMS. In a nutshell, HAMS is basically a smartphone-based system where you're mounting your smartphone on the windshield of a vehicle to monitor both the driver and the driving in real time, with the goal of improving road safety. Basically, it observes key aspects such as where the driver is looking and whether they are distracted or fatigued, while also considering the external driving environment, because we truly believe that to improve road safety, we need to understand not just the driver's actions but also the context in which they are driving. For example, if the smartphone's accelerometer detects sharp braking, the system would automatically check the distance to the vehicle in front using the rear camera and whether the driver was distracted or fatigued using the front camera.
And this holistic approach ensures a more accurate and comprehensive assessment of the driving behavior, enabling more meaningful feedback.

STETKIEWICZ: So that sounds like a system that's got several moving parts to it. And I imagine you had some technical challenges you had to deal with there. Can you talk about that?

NAMBI: One of our guiding principles in HAMS was to use commodity, off-the-shelf smartphone devices, right. This should be affordable, in the range of $100 to $200, so that you can just take regular smartphones and enable this driver and driving monitoring. And that led to handling several technical challenges. For instance, we had to develop efficient computer vision algorithms that could run locally on the device with cheap smartphone processing units while still performing very well in low-light conditions. We wrote multiple papers and developed many of the novel algorithms, which we implemented on very low-cost smartphones. And once we had such a monitoring system, right, you can imagine there are several deployment opportunities, starting from fleet monitoring to even training new drivers, right. However, one application we hadn't originally envisioned but turned out to be its most impactful use case even today is automated driver's license testing. As you know, before you get a license, a driver is supposed to pass a test, but what happens in many places, including India, is that licenses are issued with very minimal or no actual testing, leading to unsafe and untrained drivers on the road. At the same time as we were working on HAMS, the Indian government was looking at introducing technology to make testing more transparent and also automated. So we worked with the right set of partners, and we demonstrated to the government that HAMS could actually completely automate the entire license testing process.
So we first deployed this system in the Dehradun RTO (Regional Transport Office), which is the equivalent of a DMV in the US, in 2019, working very closely with RTO officials to define what should be some of the evaluation criteria, right. Some of these would be very simple, like, oh, is it the same candidate who is taking the test who actually registered for the test, right? And whether they are wearing seat belts. Did they scan their mirrors before taking a left turn? And how well did they perform in tasks like reverse parking and things like that?

STETKIEWICZ: So what's been the government response to that? Have they embraced it or deployed it to a wider extent?

NAMBI: Yes, yes. So after the deployment in Dehradun in 2019, we actually open-sourced the entire HAMS technology, and our partners are now working with several state governments and have scaled HAMS to several states in India. And as of today, we have around 28 RTOs where HAMS is actually being deployed, and the pass rate of such license tests is just 60% as compared to 90-plus percent with manual testing. That's the extensive rigor the system brings in. And now what excites me is, nearly five years later, we are taking the next step in this project, where we are evaluating the long-term impact of this intervention on driving behavior and road safety. So we are collaborating with Professor Michael Kremer, who is a Nobel laureate and professor at the University of Chicago, and his team to study how this technology has influenced driving patterns and accident rates over time. So this focus on closing the loop and moving beyond just deployment in the field to actually measuring the real impact, right, is something that truly excites me and that makes research at Microsoft very unique.
And that is actually one of the reasons why I joined Microsoft Research full time after my internship, and this unique flexibility to work on real-world problems, develop novel research ideas, and actually collaborate with partners both internally and externally to deploy at scale is something that is very unique here.

STETKIEWICZ: So have you actually received any evidence that the project is working? Is driving getting safer?

NAMBI: Yes, these are very early analyses, and there are very positive insights we are getting from that. Soon we will be releasing a white paper on our study of this long-term impact.

STETKIEWICZ: That's great. I look forward to that one. So you've also done some interesting work involving the Internet of Things, with an emphasis on making it more reliable and practical. So for those in our audience who may not know, the Internet of Things, or IoT, is a network that includes billions of devices and sensors in things like smart thermostats and fitness trackers. So talk a little bit about your work in this area.

NAMBI: Right, so IoT, as you know, is already transforming several industries, with billions of sensors being deployed in areas like industrial monitoring, manufacturing, agriculture, smart buildings, and also air pollution monitoring. And if you think about it, these sensors provide critical data that businesses rely on for decision making. However, a fundamental challenge is ensuring that the data collected from these sensors is actually reliable. If the data is faulty, it can lead to poor decisions and inefficiencies. And the challenge is that these sensor failures are not always obvious. What I mean by that is when a sensor stops working, it doesn't always stop sending data, but it often continues to send some data which appears to be normal. And that's one of the biggest problems, right.
So detecting these errors is non-trivial because the faulty sensors can mimic real working data, and traditional solutions like deploying redundant sensors or even manually inspecting them are very expensive, labor intensive, and also sometimes infeasible, especially for remote deployments. Our goal in this work was to develop a simple and efficient way to remotely monitor the health of IoT sensors. So what we did was we hypothesized that most sensor failures occur due to electronic malfunctions. It could be either due to short circuits or component degradation or due to environmental factors such as heat, humidity, or pollution. Since these failures originate within the sensor hardware itself, we saw an opportunity to leverage some of the basic electronic principles to create a novel solution. The core idea was to develop a way to automatically generate a fingerprint for each sensor. And by fingerprint, I mean the unique electrical characteristic exhibited by a properly working sensor. We built a system that could devise these fingerprints for different types of sensors, allowing us to detect failures purely based on the sensor's internal characteristics, that is, the fingerprint, and even without looking at the data it produces. Essentially, what it means now is that we were able to tag each sensor's data with a reliability score, ensuring verifiability.

STETKIEWICZ: So how does that technology get deployed in the real world? Is there an application where it's being put to work today?

NAMBI: Yes, we worked together with Azure IoT on this technology and open-sourced it, and several companies took the solution into their systems, including for air pollution monitoring, smart buildings, and industrial monitoring. The one which I would like to talk about today is air pollution monitoring. As you know, air pollution is a major challenge in many parts of the world, especially in India.
And traditionally, air quality monitoring relies on these expensive fixed sensors, which provide limited coverage. On the other hand, there is a rich body of work on low-cost sensors, which can offer wider deployment. Like, you can put these sensors on a bus or a vehicle and have it move around the entire city, where you can get a much more fine-grained, accurate picture on the ground. But these are often unreliable because these are low-cost sensors and have reliability issues. So we collaborated with several startups who were developing these low-cost air pollution sensors and who were finding it very challenging to gain trust, because one of the main concerns was the accuracy of the data from low-cost sensors. So our solution seamlessly integrated with these sensors, which enabled verification of the quality of the data coming out of these low-cost air pollution sensors. So this bridged the trust gap, allowing government agencies to initiate large-scale pilots using low-cost sensors for fine-grained air-quality monitoring.

STETKIEWICZ: So as we're talking about evolving technology, large language models, or LLMs, are also enabling big changes, and they're not theoretical. They're happening today. And you've been working on LLMs and their applicability to real-world problems. Can you talk about your work there and some of the latest releases?

NAMBI: So when ChatGPT was first released, I, like many people, was very skeptical. However, I was also curious both about how it worked and, more importantly, whether it could accelerate solutions to real-world problems. That led to the exploration of LLMs in education, where we fundamentally asked this question: can AI help improve educational outcomes?
And this was one of the key questions which led to the development of Shiksha copilot, which is a genAI-powered assistant designed to support teachers in their daily work, starting from helping them to create personalized learning experiences, design assignments, generate hands-on activities, and even more. Teachers today universally face several challenges, from time management to lesson planning. And our goal with Shiksha was to empower them to significantly reduce the time spent on these tasks. For instance, lesson planning, which traditionally took about 60 minutes, can now be completed in just five minutes using the Shiksha copilot. And what makes Shiksha unique is that it's completely grounded in the local curriculum and the learning objectives, ensuring that the AI-generated content aligns very well with pedagogical best practices. The system actually supports multilingual interactions, multimodal capabilities, and also integration with external knowledge bases, making it highly adaptable for different curriculums. Initially, many teachers were skeptical. Some feared this would limit their creativity. However, as they began using Shiksha, they realized that it didn't replace their expertise, but rather amplified it, enabling them to do work faster and more efficiently.

STETKIEWICZ: So, Akshay, the last time you and I talked about Shiksha copilot, it was very much in the pilot phase and the teachers were just getting their hands on it. So it sounds like, though, you've gotten some pretty good feedback from them since then.

NAMBI: Yes, so when we were last discussing, we were doing this six-month pilot with 50-plus teachers, where we gathered overwhelmingly positive feedback on how the technology was helping teachers to reduce time in their lesson planning. And in fact, they were using the system so much that they really enjoyed working with Shiksha copilot, where they were able to do more things with much less time, right.
And with a lot of feedback from teachers, we have improved Shiksha copilot over the past few months. And starting this academic year, we have already deployed Shiksha to 1,000-plus teachers in Karnataka. This is in close collaboration with our partners at the Sikshana Foundation and also with the government of Karnataka. And the response has already been incredibly encouraging. And looking ahead, we are actually focusing, again, on closing this loop, right, and measuring the impact on the ground, where we are doing a lot of studies with the teachers to understand not just the improved efficiency of the teachers but also how AI-generated content enriched by teachers is actually enhancing student learning objectives. So that's the study we are conducting, which hopefully will close this loop and answer our original question: can AI actually help improve educational outcomes?

STETKIEWICZ: And is the deployment primarily in rural areas, or does it include urban centers, or what's the target?

NAMBI: So the current deployment with 1,000 teachers is a combination of both rural and urban public schools. These are covering both English-medium and Kannada-medium teaching schools, with grades from Class 5 to Class 10.

STETKIEWICZ: Great. So Shiksha was focused on helping teachers and making their jobs easier, but I understand you're also working on some opportunities to use AI to help students succeed. Can you talk about that?

NAMBI: So as you know, LLMs are still evolving and inherently fragile, and deploying them in real-world settings, especially in education, presents a lot of challenges. With Shiksha, if you think about it, teachers remain in control throughout the interaction, making the final decision on whether to use the AI-generated content in the classroom or not.
However, when it comes to AI tutors for students, the stakes are slightly higher, where we need to ensure the AI doesn't produce incorrect answers, misrepresent concepts, or provide misleading explanations. Currently, we are developing solutions to enhance the accuracy and also the reasoning capabilities of these foundational models, particularly for solving math problems. This represents a major step toward building AI systems that are much more holistic personal tutors, which support student understanding and create more engaging, effective learning experiences.

STETKIEWICZ: So you've talked about working in computer vision and IoT and LLMs. What do those areas have in common? Is there some thread that weaves through the work that you're doing?

NAMBI: That's a great question. As a systems researcher, I'm quite interested in this end-to-end systems development, which means that my focus is not just about improving a particular algorithm but also thinking about the end-to-end system, which means that I, kind of, think about computer vision, IoT, and even LLMs as tools, where we would want to improve them for a particular application. It could be agriculture, education, or road safety. And then how do you think about this holistically to come up with the most efficient system that can be deployed at population scale, right. I think that's the connecting story here: how do you have this systemic thinking which, kind of, takes the existing tools, improves them, makes them more efficient, and takes them out from the lab to the real world.

STETKIEWICZ: So you're working on some very powerful technology that is creating tangible benefits for society, which is your goal. At the same time, we're still in the very early stages of the development of AI and machine learning. Have you ever thought about unintended consequences? Are there some things that could go wrong, even if we get the technology right? And does that kind of thinking ever influence the development process?

NAMBI: Absolutely.
Unintended consequences are something I think about deeply. Even the most well-designed technology can have these ripple effects that we may not fully anticipate, especially when we are deploying it at population scale. For me, being proactive is one of the key aspects. This means not only designing the technology in the lab but actually also carefully deploying it in the real world, measuring its impact, and working with the stakeholders to minimize the harm. In most of my work, I try to work very closely with the partner team on the ground to monitor and analyze how the technology is being used, what some of the risks are, and how we can eliminate them. At the same time, I also remain very optimistic. It's also about responsibility. If we are able to embed societal values and ethics into the design of the system and involve diverse perspectives, especially from people on the ground, we can remain vigilant as the technology evolves, and we can create systems that truly deliver immense societal benefits while addressing many of the potential risks.

STETKIEWICZ: So we've heard a lot of great examples today about building technology to solve real-world problems and your motivation to keep doing that. So as you look ahead, where do you see your research going next? How will people be better off because of the technology you develop and the advances that they support?

NAMBI: Yeah, I'm deeply interested in advancing AI systems that can truly assist anyone in their daily tasks, whether it's providing personalized guidance to a farmer in a rural village, helping a student get instant 24/7 support for their learning doubts, or even empowering professionals to work more efficiently. And to achieve this, my research is focusing on tackling some of the fundamental challenges in AI with respect to reasoning and reliability and also making sure that AI is more context aware and responsive to evolving user needs.
And looking ahead, I envision AI as not just an assistant but also as an intelligent and equitable copilot, seamlessly integrated into our everyday life, empowering individuals across various domains.

STETKIEWICZ: Great. Well, Akshay, thank you for joining us on Ideas. It's been a pleasure.

[MUSIC]

NAMBI: Yeah, I really enjoyed talking to you, Chris. Thank you.

STETKIEWICZ: Till next time.

[MUSIC FADES]
  • Advances to low-bit quantization enable LLMs on edge devices
    www.microsoft.com
Large language models (LLMs) are increasingly being deployed on edge devices, hardware that processes data locally near the data source, such as smartphones, laptops, and robots. Running LLMs on these devices supports advanced AI and real-time services, but their massive size, with hundreds of millions of parameters, requires significant memory and computational power, limiting widespread adoption. Low-bit quantization, a technique that compresses models and reduces memory demands, offers a solution by enabling more efficient operation.

Recent advances in low-bit quantization have made mixed-precision matrix multiplication (mpGEMM) viable for LLMs. This deep learning technique allows data of the same or different formats to be multiplied, such as int8*int1, int8*int2, or FP16*int4. By combining a variety of precision levels, mpGEMM strikes a balance among speed, memory efficiency, and computational accuracy.

However, most hardware supports only symmetric computations, that is, operations on data of similar formats, creating challenges for mixed-precision calculations during general matrix multiplication (GEMM), a critical operation for LLMs. Overcoming these hardware limitations is essential to fully benefit from mpGEMM and support asymmetric computations.

To unlock the potential of low-bit quantization on resource-constrained edge devices, hardware must natively support mpGEMM. To address this, we developed the following three approaches for computing kernels and hardware architectures:

- Ladder data type compiler: Supports various low-precision data types by converting unsupported types into hardware-compatible ones without data loss, while also generating high-performance conversion code.
- T-MAC mpGEMM library: Implements GEMM using a lookup table (LUT) approach, eliminating multiplications to significantly reduce computational overhead.
Optimized for diverse CPUs, T-MAC delivers several times the speed of other libraries.
- LUT Tensor Core hardware architecture: Introduces a cutting-edge design for next-generation AI hardware, tailored for low-bit quantization and mixed-precision computations.

The following sections describe these techniques in detail.

Ladder: Bridging the gap between custom data and hardware limits

Cutting-edge hardware accelerators, such as GPUs, TPUs, and specialized chips, are designed to speed up computationally intensive tasks like deep learning by efficiently handling large-scale operations. These accelerators now integrate lower-bit computing units, such as FP32, FP16, and even FP8, into their architectures.

However, constraints in chip area and hardware costs limit the availability of these units to standard data types. For instance, the NVIDIA V100 Tensor Core GPU supports only FP16, while the A100 supports int2, int4, and int8 but not newer formats like FP8 or OCP-MXFP. Additionally, the rapid development of LLMs often outpaces hardware upgrades, leaving many new data formats unsupported and complicating deployment.

At the same time, while hardware accelerators may lack direct support for custom data types, their memory systems can convert these types into fixed-width data blocks that can store any data format. For instance, NF4 tensors can be converted into FP16 or FP32 for floating-point operations.

Building on these insights, we developed the Ladder data type compiler, a method to separate data storage from computation, enabling broader support for custom data types. It bridges the gap between emerging custom data formats and the precision types supported by current hardware.

Ladder offers a flexible system for converting between algorithm-specific and hardware-supported data types without data loss. For low-bit applications, it optimizes performance by translating low-bit data into the most efficient formats for the hardware being used.
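The storage/computation separation that Ladder performs can be illustrated with a toy example: a custom 4-bit codebook format is stored as packed indices, then losslessly expanded into a hardware-supported type (FP16) just before computation. The codebook values, packing scheme, and function names below are illustrative assumptions for this sketch, not Ladder's actual implementation or the real NF4 format.

```python
import numpy as np

# Illustrative 16-entry codebook: each 4-bit index names an FP16 value.
# (Loosely NF4-like; the real NF4 levels are not uniformly spaced.)
CODEBOOK = np.linspace(-1.0, 1.0, 16).astype(np.float16)

def pack4(indices):
    """Pack pairs of 4-bit indices into single bytes for compact storage."""
    idx = np.asarray(indices, dtype=np.uint8)
    return (idx[0::2] << 4) | idx[1::2]

def unpack4(packed):
    """Recover the 4-bit indices from the packed bytes."""
    return np.stack([packed >> 4, packed & 0x0F], axis=1).reshape(-1)

def to_fp16(packed):
    """The 'Ladder' step: expand the stored custom format into a
    hardware-supported type (FP16) before computation."""
    return CODEBOOK[unpack4(packed)]

indices = np.array([0, 15, 7, 8], dtype=np.uint8)
packed = pack4(indices)         # 2 bytes of storage instead of 4
restored = unpack4(packed)      # lossless round trip
weights_fp16 = to_fp16(packed)  # ready for an FP16 matmul on the hardware
```

The key property, which the real compiler generalizes across many formats, is that the stored representation round-trips without loss while computation always happens in a type the hardware natively supports.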
As shown in Figure 1, this includes mapping low-bit computations to supported instructions and efficiently managing data storage across the memory hierarchy.

Figure 1: The Ladder architecture

Evaluating Ladder

Evaluations of Ladder on NVIDIA and AMD GPUs show that it outperforms existing deep neural network (DNN) compilers for natively supported data types. It also handles custom data types not supported by GPUs, achieving speedups of up to 14.6 times.

As the first system to support custom low-precision data types for running DNNs on modern hardware accelerators, Ladder provides researchers with flexibility in optimizing data types. It also enables hardware developers to support a wider range of data types without requiring hardware modifications.

T-MAC: Table lookup for mpGEMM without multiplication

Deploying low-bit quantized LLMs on edge devices often requires dequantizing models to ensure hardware compatibility. However, this approach has two major drawbacks:

- Performance: Dequantization overhead can result in poor performance, negating the benefits of low-bit quantization.
- Development: Developers must redesign data layouts and kernels for different mixed precisions.

To address these challenges, we introduce T-MAC, a novel LUT-based method that enables mpGEMM without dequantization or multiplication.

T-MAC replaces traditional multiplication operations with bit-wise table lookups, offering a unified and scalable solution for mpGEMM. It incorporates techniques to reduce the size of tables and store them directly on the chip, minimizing the overhead of accessing data from memory. By eliminating dequantization and lowering computational costs, T-MAC enables efficient inference of low-bit LLMs on resource-constrained edge devices. Figure 2 illustrates T-MAC's architecture.

Figure 2. Overview of the T-MAC system

Evaluating T-MAC

Performance evaluations of T-MAC on low-bit models demonstrated substantial benefits in efficiency and speed.
On the Surface Laptop 7 with the Qualcomm Snapdragon X Elite chipset, T-MAC achieved:

48 tokens per second for the 3B BitNet-b1.58 model

30 tokens per second for the 2-bit 7B Llama model

20 tokens per second for the 4-bit 7B Llama model

These speeds far exceed average human reading rates, outperforming llama.cpp by 4 to 5 times and doubling the speed of a dedicated NPU accelerator.

Even on lower-end devices like the Raspberry Pi 5, T-MAC made it possible for the 3B BitNet-b1.58 model to generate 11 tokens per second. It also proved highly power-efficient, matching llama.cpp's generation rate while using only 1/4 to 1/6 of the CPU cores.

These results establish T-MAC as a practical solution for deploying LLMs on edge devices with standard CPUs, without relying on GPUs or NPUs. T-MAC allows LLMs to run efficiently on resource-constrained devices, expanding their applicability across a wider range of scenarios.

LUT Tensor Core: Driving hardware for mpGEMM

While T-MAC and Ladder optimize mpGEMM on existing CPU and GPU architectures, improving computational efficiency, they cannot match the performance of dedicated hardware accelerators with built-in LUT support. Achieving significant improvements in performance, power, and area (PPA) requires overcoming four key challenges:

Table precompute and storage: Precomputing and storing LUTs adds overhead, increasing area usage, latency, and storage requirements, which can reduce overall efficiency gains.

Bit-width flexibility: Hardware must support various precision levels, such as int4/2/1 for weights and FP16/8 or int8 for activations, along with their combinations. This flexibility is crucial for accommodating diverse model architectures and use cases.

LUT tiling shape: Inefficient tiling shapes can raise storage costs and limit reuse opportunities, adversely affecting performance and efficiency.

Instruction and compilation: LUT-based mpGEMM requires a new instruction set.
Existing compilation stacks, designed for standard GEMM hardware, may not optimally map and schedule these instructions, complicating integration with LLM inference software.

In response, we developed LUT Tensor Core, a software-hardware codesign for low-bit LLM inference. To address precomputation overhead in conventional LUT-based methods, we introduce techniques like software-based DFG transformation, operator fusion, and table symmetrization to optimize table precomputation and storage. Additionally, we propose a hardware design with an elongated tiling shape to support table reuse and a bit-serial design to handle various precision combinations in mpGEMM.

To integrate with existing GPU microarchitectures and software stacks, we extended the MMA instruction set, added new LMMA instructions, and developed a cuBLAS-like software stack for easy integration into existing DNN frameworks. We also created a compiler for end-to-end execution planning on GPUs with LUT Tensor Core. This design and workflow, illustrated in Figure 3, enabled the quick and seamless adoption of LUT Tensor Core.

Figure 3. The LUT Tensor Core workflow

Evaluating LUT Tensor Core

Testing LUT Tensor Core on low-bit LLMs, such as BitNet and Llama, showed significant performance gains, achieving 6.93 times the inference speed while using just 38.3% of the area of a traditional Tensor Core. With nearly identical model accuracy, this results in a 20.9-fold increase in computational density and an 11.2-fold boost in energy efficiency. As AI models grow in scale and complexity, LUT Tensor Core enables low-bit LLMs to be applied in new and diverse scenarios.

We believe the LUT technique could drive a paradigm shift in AI model inference. Traditional methods rely on multiplication and accumulation operations, whereas LUT implementations provide higher transistor density, greater throughput per chip area, lower energy costs, and better scalability.
As large models adopt low-bit quantization, the LUT method could become the standard for system and hardware design, advancing the next generation of AI hardware innovation.

Unlocking new possibilities for embodied AI

Low-bit quantization improves the efficiency of running large models on edge devices while also enabling model scaling by reducing the bits used to represent each parameter. This scaling enhances model capabilities, generality, and expressiveness, as shown by the BitNet model, which starts with a low-bit configuration and expands.

Technologies like T-MAC, Ladder, and LUT Tensor Core provide solutions for running low-bit quantized LLMs, supporting efficient operation across edge devices and encouraging researchers to design and optimize LLMs using low-bit quantization. By reducing memory and computational demands, low-bit LLMs could power embodied AI systems, such as robots, enabling dynamic perception and real-time environmental interaction.

T-MAC and Ladder are open source and available on GitHub. We invite you to test and explore these innovations in AI technology with Microsoft Research.
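One of the table-size optimizations mentioned above, table symmetrization, can be illustrated with a small sketch (hypothetical code, not the LUT Tensor Core hardware design). For signed weights in {-1, +1}, negating every weight in a group negates the group's partial sum, so only half of the 2^g table entries need to be stored:

```python
import itertools

def build_half_table(group):
    """Precompute partial sums only for sign patterns whose first weight is +1.

    Flipping all signs negates the sum (T(-p) == -T(p)), so the other half
    of the table is recovered by negation instead of being stored.
    """
    return {p: sum(w * a for w, a in zip(p, group))
            for p in itertools.product((1, -1), repeat=len(group))
            if p[0] == 1}

def lookup(half_table, pattern):
    """Fetch a partial sum, negating when only the mirrored entry is stored."""
    if pattern[0] == 1:
        return half_table[pattern]
    mirrored = tuple(-w for w in pattern)
    return -half_table[mirrored]
```

Halving the stored entries this way directly reduces the on-chip storage and precompute overhead that the LUT approach must pay for.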
  • Research Focus: Week of January 27, 2025
    www.microsoft.com
In this edition:

We introduce FLAVARS, a multimodal foundation language and vision alignment model for remote sensing; managed-retention memory, a new class of memory optimized to store key data structures for AI inference workloads; and enhanced detection of macular telangiectasia type 2 (MacTel 2) using self-supervised learning and ensemble models.

We present a new approach to generalizing symbolic automata, which brings together a variety of classic automata and logics in a unified framework with all the necessary ingredients to support symbolic model checking modulo A.

And we invite you to join an upcoming workshop: LLM4Eval@WSDM 2025: Large Language Models for Evaluation in Information Retrieval. LLM4Eval is a promising technique in the areas of automated judgments, natural language generation, and retrieval-augmented generation (RAG) systems. Researchers from Microsoft and experts from industry and academia will explore this technique at an interactive workshop on Friday, March 14, in Hanover, Germany.

NEW RESEARCH

In the field of remote sensing, imagery is generally dense with objects and visual content which can vary regionally across the globe. This creates a need for vision-language datasets to be highly detailed when describing imagery, and for pretraining to better balance visual task performance while retaining the ability to perform zero-shot classification and image-text retrieval.

One strategy is to combine paired satellite images and text captions for pretraining performant encoders for downstream tasks.
However, while contrastive image-text methods like CLIP enable vision-language alignment and zero-shot classification ability, CLIP's vision-only downstream performance tends to degrade compared to image-only pretraining, such as Masked Autoencoders (MAE).

To better approach multimodal pretraining for remote sensing, researchers from Microsoft propose a pretraining method that combines the best of both contrastive learning and masked modeling, along with geospatial alignment via contrastive location encoding, in the recent paper: FLAVARS: A Multimodal Foundational Language and Vision Alignment Model for Remote Sensing. The research shows that FLAVARS significantly outperforms a baseline of SkyCLIP for vision-only tasks such as KNN classification and semantic segmentation (+6% mIOU on SpaceNet1), while retaining the ability to perform zero-shot classification, unlike MAE-pretrained methods.

Read the paper

NEW RESEARCH

AI clusters today are one of the major uses of high-bandwidth memory (HBM), a high-performance type of computer memory. However, HBM is suboptimal for AI inference workloads for several reasons. Analysis shows that HBM is overprovisioned on write performance, underprovisioned on density and read bandwidth, and has significant energy-per-bit overhead. It is also expensive, with lower yield than DRAM due to manufacturing complexity.

In a recent paper: Managed-Retention Memory: A New Class of Memory for the AI Era, researchers from Microsoft propose a memory class optimized to store key data structures for AI inference workloads. The paper makes the case that MRM may finally provide a path to viability for technologies that were originally proposed to support storage-class memory (SCM). These technologies traditionally offered long-term persistence (10+ years) but provided poor IO performance and/or endurance.
MRM makes different trade-offs, and by understanding the workload IO patterns, MRM forgoes long-term data retention and write performance for better potential performance on the metrics important for AI inference.

Read the paper

NEW RESEARCH

Macular telangiectasia type 2 (MacTel) is a retinal disease that is challenging to diagnose. While increased awareness has led to improved diagnostic outcomes, MacTel diagnosis relies significantly upon a multimodal image set and the expertise of clinicians familiar with the disease. Optical coherence tomography (OCT) imaging has emerged as a valuable tool for the diagnosis and monitoring of various retinal diseases. With the increasing integration of OCT into clinical practice, deep learning models may be able to achieve accurate MacTel prediction comparable to that of retinal specialists, even when working with limited data.

Researchers from Microsoft and external colleagues address this challenge in a recent paper: Enhanced Macular Telangiectasia Type 2 Detection: Leveraging Self-Supervised Learning and Ensemble Models. Published in the journal Ophthalmology Science, the paper focuses on the accurate classification of macular telangiectasia type 2 using OCT images, with the overarching goal of facilitating early and precise detection of this neurodegenerative disease.

The researchers present results leveraging self-supervised learning and ensemble models, showing their approach improves both MacTel classification accuracy and interpretability when compared to the use of individual models.
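As a minimal sketch of the general ensembling idea (soft voting; the paper's exact aggregation scheme may differ, and the model outputs below are invented for illustration), predictions from several independently trained classifiers can be combined by averaging their class probabilities:

```python
import numpy as np

def soft_vote(prob_list):
    """Average per-class probabilities across models, then pick the
    highest-scoring class for each sample. Disagreements between
    individual models are smoothed out by the average."""
    avg = np.mean(np.stack(prob_list), axis=0)  # shape: (n_samples, n_classes)
    return avg.argmax(axis=-1)

# Hypothetical outputs of three models on two scans: (P(healthy), P(MacTel)).
m1 = np.array([[0.9, 0.1], [0.4, 0.6]])
m2 = np.array([[0.6, 0.4], [0.3, 0.7]])
m3 = np.array([[0.2, 0.8], [0.45, 0.55]])
labels = soft_vote([m1, m2, m3])  # class 0 for the first scan, class 1 for the second
```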
Ensemble models exhibited superior agreement with the assessments of the most experienced individual human experts, as well as with the ensemble of human experts.

Read the paper

NEW RESEARCH

Symbolic automata are finite state automata that support potentially infinite alphabets, such as the set of rational numbers, generally applied to regular expressions and languages over finite words. In symbolic automata (or automata modulo A), an alphabet is represented by an effective Boolean algebra A, supported by a decision procedure for satisfiability. Regular languages over infinite words (so-called ω-regular languages) have a rich history paralleling that of regular languages over finite words, with well-known applications to model checking via Büchi automata and temporal logics.

In a recent paper: Symbolic Automata: Omega-Regularity Modulo Theories, researchers from Microsoft generalize symbolic automata to support ω-regular languages via transition terms and symbolic derivatives. This brings together a variety of classic automata and logics in a unified framework that provides all the necessary ingredients to support symbolic model checking modulo A.

Read the paper

EVENT

LLMs have shown increasing task-solving abilities not present in smaller models.
Using LLMs for automated evaluation (LLM4Eval) is a promising technique in the areas of automated judgments, natural language generation, and retrieval-augmented generation (RAG) systems.

Join researchers from Microsoft and experts from industry and academia for a discussion on using LLMs for evaluation in information retrieval at the LLM4Eval Workshop at WSDM 2025, March 14, 2025, in Hanover, Germany.

This interactive workshop will cover automated judgments, RAG pipeline evaluation, altering human evaluation, robustness, and trustworthiness of LLMs for evaluation, in addition to their impact on real-world applications. The organizers believe that the information retrieval community can significantly contribute to this growing research area by designing, implementing, analyzing, and evaluating various aspects of LLMs with applications to LLM4Eval tasks.

Learn more about the workshop

Microsoft Research | In case you missed it

Microsoft Team Uses Diffusion Model For Materials Science
January 21, 2025
"Finding a new material for a target application is like finding a needle in a haystack," write the authors of a blog post at Microsoft, where they have been working on just such a program, something called, aptly, MatterGen.

Microsoft AutoGen v0.4: A turning point toward more intelligent AI agents for enterprise developers
January 18, 2025
The world of AI agents is undergoing a revolution, and Microsoft's release of AutoGen v0.4 this week marked a significant leap forward in this journey. Positioned as a robust, scalable and extensible framework, AutoGen represents Microsoft's latest attempt to address the challenges of building multi-agent systems for enterprise applications.
2 AI breakthroughs unlock new potential for health and science
January 17, 2025
Two new research papers published this week in scientific journals, one in Nature and one in Nature Machine Intelligence, show how generative AI foundation models can exponentially speed up scientific discovery of new materials and help doctors access and analyze radiology results faster.

ChatGPT gets proactive with 'Tasks'
January 15, 2025
Good morning, AI enthusiasts. OpenAI's AI agent era just got its unofficial start with ChatGPT gaining the ability to schedule and manage daily tasks. With Tasks rolling out and mysterious Operator whispers in the air, is OpenAI finally ready to move from chatbots to full-on autonomous assistants?

Mayo Clinic and Microsoft partner to advance generative AI in radiology
January 15, 2025
The Mayo Clinic is seeking to advance the use of generative artificial intelligence in imaging through a new collaboration with Microsoft Research. The duo made the announcement during the 43rd Annual J.P. Morgan Healthcare Conference taking place now in San Francisco.

View more news and awards
  • Ideas: Bug hunting with Shan Lu
    www.microsoft.com
Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

SHAN LU: I remember, you know, those older days myself, right. That is really, like, I have this struggle that I feel like I can do better. I feel like I have ideas to contribute. But just for whatever reason, right, it took me forever to learn something which I feel like it's a very mechanical thing, but it just takes me forever to learn, right. And then now actually, I see this hope, right, with AI. You know, a lot of mechanical things that can actually now be done in a much more automated way, you know, by AI, right. So then now truly, you know, my daughter, many girls, many kids out there, right, whatever, you know, they are good at, their creativity, it'll be much easier, right, for them to contribute their creativity to whatever discipline they are passionate about.

[TEASER ENDS]

GRETCHEN HUIZINGA: Ideas, a Microsoft Research Podcast that dives deep into the world of technology research and the profound questions behind the code. I'm Gretchen Huizinga. In this series, we'll explore the technologies that are shaping our future and the big ideas that propel them forward.

[MUSIC FADES]

Today I'm talking to Shan Lu, a senior principal research manager at Microsoft Research and a computer science professor at the University of Chicago. Part of the Systems Research Group, Shan and her colleagues are working to make our computer systems, and I quote, "secure, scalable, fault tolerant, manageable, fast, and efficient." That's no small order, so I'm excited to explore the big ideas behind Shan's influential research and find out more about her reputation as a bug bounty hunter. Shan Lu, welcome to Ideas!

SHAN LU: Thank you.

HUIZINGA: So I like to start these episodes with what I've been calling the research origin story, and you have a unique, almost counterintuitive, story about what got you started in the field of systems research. Would you share that story with our listeners?

LU: Sure, sure. Yeah.
I grew up fascinating that I will become mathematician. I think I was good at math, and at some point, actually, until, I think, I entered college, I was still, you know, thinking about, should I do math? Should I do computer science? For whatever reason, I think someone told me, you know, doing computer science will help you; it's easier to get a job. And I reluctantly pick up computer science major. And then there was a few years in my college, I had a really difficult time for programming. And I also remember that there was, like, I spent a lot of time learning one language (we started with Pascal) and I feel like I finally know what to do and then there's yet another language, C, and another class, Java. And I remember, like, the teacher will ask us to do a programming project, and there are times I don't even, I just don't know how to get started. And I remember, at that time, in my class, I think there were, we only had, like, four girls taking this class that requires programming in Java, and none of us have learned Java before. And when we ask our classmates, when we ask the boys, they just naturally know what to do. It was really, really humiliating. Embarrassing. I had the feeling that, I felt like I'm just not born to be a programmer. And then, I came to graduate school. I was thinking about, you know, what kind of research direction I should do. And I was thinking that, oh, maybe I should do theory research, like, you know, complexity theory or something. You know, after a lot of back and forth, I met my eventual adviser. She was a great, great mentor to me, and she told me that, hey, Shan, you know, my group is doing research about finding bugs in software.
And she said her group is doing system research, and she said a lot of current team members are all great programmers, and as a result, they are not really well-motivated [LAUGHS] by finding bugs in software!

HUIZINGA: Interesting.

LU: And then she said, you are really motivated, right, by, you know, getting help to developers, to help developers finding bugs in their software, so maybe that's the research project for you. So that's how I got started.

HUIZINGA: Well, let's go a little bit further on this mentor and mentors in general. As Dr. Seuss might say, every what has a who. So by that I mean an inspirational person or people behind every successful researcher's career. And most often, they're kind of big names and meaningful relationships, but you have another unique story on who has influenced you in your career, so why don't you tell us about the spectrum of people who've been influential in your life and your career?

LU: Mm-hmm. Yeah, I mean, I think I mentioned my adviser, and she's just so supportive. And I remember, when I started doing research, I just felt like I seemed to be so far behind everyone else. You know, I felt like, how come everybody else knows how to ask, you know, insightful questions? And they, like, they know how to program really fast, bug free. And my adviser really encouraged me, saying, you know, there are background knowledge that you can pick up; you just need to be patient. But then there are also, like, you know how to do research, you know how to think about things, problem solving. And she encouraged me saying, Shan, you're good at that!

HUIZINGA: Interesting!

LU: Well, I don't know how she found out, and anyway, so she was super, super helpful.

HUIZINGA: OK, so go a little further on this because I know you have others that have influenced you, as well.

LU: Yes. Yes, yes. And I think those, to be honest, I'm a very emotional, sensitive person. I would just, you know, move the timeline to be, kind of, more recent.
So I joined Microsoft Research as a manager, and there's something called Connect that, you know, people write down twice every year talking about what it is they've been doing. So I was just checking, you know, my members in my team to see what they have been doing over the years just to get myself familiar with them. And I remember I read several of them. I felt like I almost have tears in my eyes! Like, I realized, wow. And just to give an example, for Chris, Chris Hawblitzel, I read his Connect, and I saw that he's working on something called program verification. It's a very, very difficult problem, and [as an] outsider, you know, I've read many of his papers, but when I read, you know, his own writing, I realized, wow, you know, it's almost two decades, right. Like, he just keeps doing these very difficult things. And I read his words about, you know, how his old approach has problems, how he's thinking about how to address that problem. Oh, I have an idea, right. And then spend multiple years to implement that idea and get improvement; find a new problem and then just find new solutions. And I really feel like, wow, I'm really, really... like, I feel like there's, how to say, a hero-ish story behind this, you know, this kind of goal, and you're willing to spend many years to keep tackling this challenging problem. And I just feel like, wow, I'm so honored, you know, to be in the same group with a group of fighters, you know, determined to tackle difficult research problems.

HUIZINGA: Yeah. And I think when you talk about it, it's like this is a person that was working for you, a direct report. [LAUGHTER] And often, we think about our heroes as being the ones who mentored us, who taught us, who managed us, but yours is kind of 360! It's like...

LU: True!

HUIZINGA: ...your heroes [are] above, beside and below.

LU: Right.
And I would just say that I have many other, you know, direct reports in my group, and I have, you know, for example, a couple other of my colleagues, my direct reports, Dan Ports and Jacob Nelson. And again, this is something, like, their story really inspired me. Like, they were, again, spent five or six years on something, and it looks like, oh, it's close to the success of tech transfer, and then something out of their control happened. It happened because Intel decided to stop manufacturing a chip that their research relied on. And it's, kind of, like the end of the world to them,

HUIZINGA: Yeah.

LU: ...and then they did not give up. And then, you know, like, one year later, they found a solution, you know, together with their product team collaborators.

HUIZINGA: Wow.

LU: And I still feel like, wow, you know, I feel so... I feel like I'm inspired every day! Like, I'm so happy to be working together with, you know, all these great people, great researchers in my team.

HUIZINGA: Yeah. Wow. So much of your work centers on this idea of concurrent systems and I want you to talk about some specific examples of this work next, but I think it warrants a little explication upfront for those people in the audience who don't spend all their time working on concurrent systems themselves. So give us a short 101 on concurrent systems and explain why the work you do matters to both the people who make it and the people who use it.

LU: Sure. Yeah. So I think a lot of people may not realize, so actually, the software we're using every day, almost every software we use these days are concurrent. So the meaning of concurrent is that you have multiple threads of execution going on at the same time, in parallel. And then, when we go to a web browser, right, so it's not just one rendering that is going on. There are actually multiple concurrent renderings that are going on. So the problem of writing, for software developers to develop this type of concurrent system, a challenge is the timing.
So because you have multiple concurrent things going on, it's very difficult to manage and reason about, you know, what may happen first, what may happen second. And also, it's, like, there's an inherent non-determinism in it. What happened first this time may happen second next time. So as a result, a lot of bugs are introduced by this. And it was a very challenging problem because, I would say, about 20 years ago, there was a shift. Like, in the older days, actually most of our software is written in a sequential way instead of a concurrent way. So, you know, a lot of developers also have a difficult time to shift their mindset from the sequential way of reasoning to this concurrent way of reasoning.

HUIZINGA: Right. Well, and I think, from a user's perspective, all you experience is what I like to call the spinning beachball of doom. It's like I've asked something, and it doesn't want to give, so... [LAUGHS] And this is, like, behind the scenes from a reasoning perspective of, how do we keep that from happening to our users? How do we identify the bugs? Which we'll get to in a second. Umm. Thanks for that. Your research now revolves around what I would call the big idea of learning from mistakes. And in fact, it all seems to have started with a paper that you published way back in 2008 called Learning from Mistakes: A Comprehensive Study on Real World Concurrency Bug Characteristics, and you say this strongly influenced your research style and approach. And by the way, I'll note that this paper received the Most Influential Paper Award in 2022 from ASPLOS, which is the Architectural Support for Programming Languages and Operating Systems conference. Huge mouthful. And it also has more than a thousand citations, so I dare say it's influenced other researchers' approach to research, as well. Talk about the big idea behind this paper and exactly how it informed your research style and approach today.

LU: Mm-hmm. Yeah.
So I think this, like, again, went back to the days that I, you know, my PhD days, I started working with my adviser, you know, YY (Yuanyuan Zhou). So at that time, there had been a lot of people working on bug finding, but then now when I think about it, people just magically say, hey, I want to look at this type of bug. Just magically, oh, I want to look at that type of bug. And then, my adviser at that time suggested to me, saying, hey, maybe, you know, actually take a look, right. At that time, as I mentioned, software was kind of shifting from sequential software to concurrent software, and my adviser was saying, hey, just take a look at those real systems' bug databases, and see what type of concurrency bugs are actually there. You know, instead of just randomly saying, oh, I want to work on this type of bug.

HUIZINGA: Oh, yeah.

LU: And then also, of course, it's not just look at it. It's not just like you read a novel or something, right. [LAUGHTER] And again, my adviser said, hey, Shan, right, you have this, you have a connection, natural connection, you know, with bugs and the developers who commit...

HUIZINGA: Who make them...

LU: Who make them! [LAUGHTER] So she said, you know, try to think about the patterns behind them, right. Try to think about whether you can generalize some...

HUIZINGA: Interesting...

LU: ...characteristics, and use that to guide people's research in this domain. And at that time, we were actually thinking we don't know whether, you know, we can actually write a paper about it because traditionally you publish a paper, just say, oh, I have a new tool, right, which can do this and that. At that time in system conferences, people rarely have, you know, just say, here's a study, right. But we studied that, and indeed, you know, I had this thought that, hey, why I make a lot of mistakes. And when I study a lot of bugs, the more and more, I feel, you know, there's a reason behind it, right. It's like I'm not the only dumb person in the world, right?
[LAUGHTER] There's a reason that, you know, there's some part of this language that is difficult to use, right, and there's a certain type of concurrent reasoning, it's just not natural to many people, right. So because of that, there are patterns behind these bugs. And so at that time, we were surprised that the paper was actually accepted. Because I'm just happy with the learning I get. But after this paper was accepted, in the next, I would say, many years, there are more and more people who realize, hey, before we actually, you know, do bug-finding things, let's first do a study, right, to understand, and then this paper was... yeah, I was very happy that it was cited many, many times.

HUIZINGA: Yeah. And then gets the most influential paper many years later.

LU: Many years later. Yes.

HUIZINGA: Yeah, I feel like there's a lot of things going through my head right now, one of which is what AI is, is a pattern detector, and you were doing that before AI even came on the scene. Which goes to show you that humans are pretty good at pattern detection also. We might not do it as fast as...

LU: True.

HUIZINGA: ...as an AI, but... so this idea of learning from mistakes is a broad theme. Another theme that I see coming through your papers and your work is persistence. [LAUGHTER] And you mentioned this about your team, right. I was like, these people are people who don't give up. So we covered this idea in an Abstracts podcast recently talking about a paper which really brings this to light: If at First You Don't Succeed, Try, Try Again. That's the name of the paper. And we didn't have time to discuss it in depth at the time because the Abstracts show is so quick. But we do now. So I'd like you to expand a little bit on this big idea of persistence and how large language models are not only changing the way programming and verification happens but also providing insights into detecting retry bugs.

LU: Yes.
So I guess maybe I will, since you mentioned this persistence, you know, after that Learning from Mistakes paper (so that was in 2008) and in the next 10 years, a little bit more than 10 years, in terms of persistence, right, so we have continued, me and my students, my collaborators, we have continued working on, you know, finding concurrency bugs...

HUIZINGA: Yeah.

LU: ...which is related to, kind of related to, why I'm here at Microsoft Research. And we keep doing it, doing it, and then I feel like a high point was that I had a collaboration with my now colleagues here, Madan Musuvathi and Suman Nath. So we built a tool to detect concurrency bugs, and after more than 15 years of effort on this, we were able to find more than 1,000 concurrency bugs. It was built in a tool called Torch that was deployed in the company, and it won the Best Paper Award at the top system conference, SOSP, and it was actually a bittersweet moment. This paper seems to, you know, put an end...

HUIZINGA: Oh, interesting!

LU: ...to our research. And also some of the findings from that paper are that we used to do very sophisticated program analysis to reason about the timing. And in that paper, we realized actually, sometimes, if you're a little bit fuzzy, don't aim to do perfect analysis, the resulting tool is actually more effective. So after that paper, Madan, Suman, and me, we kind of, you know, shifted our focus to looking at other types of bugs. And at the same time, the three of us realized the traditional, very precise program analysis may not be needed for some of the bug finding. So then, for this paper, these retry bugs, after we shifted our focus away from concurrency bugs, we realized, oh, there are many other types of important bugs, such as, in this case, like retry, right, when your software goes wrong, right. Another thing we learned is that it looks like you can never eliminate all bugs, so something will go wrong, [LAUGHTER] and then... so that's why you need something like retry, right.
So like if something goes wrong, at least you won't give up immediately.

HUIZINGA: Right.

LU: The software will retry. And another thing that started from this earlier effort is we started using large language models, because we realized, yeah, you know, traditional program analysis sometimes can give you a very strong guarantee, but in some other cases, like in this retry case, some kind of fuzzy analysis, you know, not so precise, offered by large language models is sometimes even more beneficial. Yeah. So that's kind of, you know, the story behind this paper.

HUIZINGA: Yeah, yeah, yeah, yeah. So, Shan, we're hearing a lot about how large language models are writing code nowadays. In fact, NVIDIA's CEO says, mamas, don't let your babies grow up to be coders, because AI's going to do that. I don't know if he's right, but one of the projects you're most excited about right now is called Verus, and your colleague Jay Lorch recently said that he sees a lot of synergy between AI and verification, where each discipline brings something to the other, and Rafah Hosn has referred to this as co-innovation or bidirectional enrichment. I don't know if that's exactly what is going on here, but it seems like it is. Tell us more about this project, Verus, and how AI and software verification are helping each other out.

LU: Yes, yes, yes, yes. I'm very excited about this project now! So first of all, starting from Verus. So Verus is a tool that helps you verify the correctness of Rust code. So this is a... it's a relatively new tool, but it's creating a lot of, you know, excitement in the research community, and it's created by my colleague Chris Hawblitzel and his collaborators outside Microsoft Research.

HUIZINGA: Interesting.

LU: And as I mentioned, right, this is a part that, you know, really inspired me. So traditionally to verify, right, your program is correct, it requires a lot of expertise. You actually have to write your proof typically in a special language.
And, you know, so a lot of people, including me, right, who are so eager to get rid of bugs in my software, but there are people who told me, saying, just to learn that language (so they were referring to a language called Coq), just to learn that language, they said, it takes one or two years. And then once you learn that language, right, then you have to learn about how to write proofs in that special language. So people, particularly in the bug-finding community, people know that, oh, in theory, you can verify it, but in reality, people don't do that. OK, so now going back to this Verus tool, why it's exciting... so it actually allows people to write proofs in Rust. So Rust is an increasingly popular language. And there are more and more people picking up Rust. It's the first time I heard about, oh, you can, you know, write proofs in a popular language. And also, another thing is in the past, you cannot verify an implementation directly. You can only verify something written in a special language. And the proof is proving something that is in a special language. And then finally, that special language is maybe then transformed into an implementation. So it's just, there's just too many special languages there.

HUIZINGA: A lot of layers.

LU: A lot of layers. So now this Verus tool allows you to write a proof in Rust to prove an implementation that is in Rust. So it's very direct. I just feel like I'm just not good at learning a new language.

HUIZINGA: Interesting.

LU: So when I came here, you know, and learned about this Verus tool, you know, by Chris and his collaborators, I feel like, oh, looks like maybe I can give it a try. And surprisingly, I realized, oh, wow! I can actually write proofs using this Verus tool.

HUIZINGA: Right.

LU: And then, of course, you know, I was told, if you really want to, right, write proofs for large systems, it still takes a lot of effort.
And then this idea came to me that, hey, maybe, you know, these days, like, large language models can write code, then why not let large language models write proofs, right? And of course, you know, other people actually had this idea, as well, but there's a doubt that, you know, can large language models really write proofs, right? And also, people have this feeling that, you know, large language models seem not very disciplined, you know, by nature. But, you know, that's what intrigued me, right. And also, I used to be a doubter for, say, GitHub Copilot. USED to! Because I feel like, yes, it can generate a lot of code, but who knows... [LAUGHS]

HUIZINGA: Whether it's right...

LU: What, what is... whether it's right?

HUIZINGA: Yeah.

LU: Right, so I feel like, wow, you know, this could be a game-changer, right? Like, if AI can write not only code but also proofs. Yeah, so that's what I have been doing. I've been working on this for one year, and I gradually get more collaborators, both, you know, people in Microsoft Research Asia, and, you know, expertise here, like Chris and Jay Lorch. They all help me a lot. So we actually have made a lot of progress.

HUIZINGA: Yeah.

LU: Like, now it's, like, we've tried, like, for example, for some small programs, benchmarks, and we see that actually large language models can correctly prove the majority of the benchmarks that we throw at it. Yeah. It's very, very exciting.

HUIZINGA: Well, and so... and we're going to talk a little bit more about some of those doubts and some of those interesting concerns in a bit. I do want you to address what I think Jay was getting at, which is that somehow the two help each other. The verification improves the AI. The AI improves the verification.

LU: Yes, yes.

HUIZINGA: How?

LU: Yes. My feeling is that a lot of people, if they're concerned with using AI, it's because they feel like there's no guarantee for the content generated by AI, right. And then we also all heard about, you know, hallucination. And I tried myself.
Like, I remember, at some point, if I ask AI, say, you know, which is bigger: is it three times three or eight? And the AI will tell me eight is bigger. And... [LAUGHTER]

HUIZINGA: Like, what?

LU: So I feel like verification can really help AI...

HUIZINGA: Get better...

LU: ...because now you can give, you know, kind of, add in mathematical rigors into whatever that is generated by AI, right. And I say it would help AI. It will also help people who use AI, right, so that they know what can be trusted, right.

HUIZINGA: Right.

LU: What is guaranteed by this content generated by AI?

HUIZINGA: Yeah, yeah, yeah.

LU: Yeah, and now of course AI can help verification because, you know, verification, you know, it's hard. There is a lot of mathematical reasoning behind it. [LAUGHS] And so now with AI, it will enable verification to be picked up by more and more developers so that we can get higher-quality software.

HUIZINGA: Yeah.

LU: Yeah.

HUIZINGA: Yeah. And we'll get to that, too, about what I would call the democratization of things. But before that, I want to, again, say an observation that I had based on your work and my conversations with you is that you've basically dedicated your career to hunting bugs.

LU: Yes.

HUIZINGA: And maybe that's partly due to a personal story about how a tiny mistake became a bug that haunted you for years. Tell us the story.

LU: Yes.

HUIZINGA: And explain why and how it launched a lifelong quest to understand, detect, and expose bugs of all kinds.

LU: Yes. So before I came here, I already had multiple times, you know, interacting with Microsoft Research. So I was a summer intern at Microsoft Research Redmond almost 20 years ago.

HUIZINGA: Oh, wow!

LU: I think it was in the summer of 2005. And I remember I came here, you know, full of ambition. And I thought, OK, you know, I will implement some smart algorithm. I will deliver some useful tools. So at that time, I had just finished two years of my PhD, so I, kind of, just started my research on bug finding and so on.
And I remember I came here, and I was told that I need to program in C#. And, you know, I just naturally have a fear of learning a new language. But anyway, I remember, I thought, oh, the task I was assigned was very straightforward. And I think I went ahead of myself. I was thinking, oh, I want to quickly finish this, and I want to do something more novel, you know, that can be more creative. But then this simple task I was assigned, I ended up spending the whole summer on it. So the tool that I wrote was supposed to process very huge logs. And then the problem is my software is, like, you run it initially... So, like, I can only run it for 10 minutes, because my software used so much memory and it will crash. And then, I spent a lot of time... I was thinking, oh, my software is just using too much memory. Let me optimize it, right. And then so, I, you know, I try to make sure to use memory in a very efficient way, but then as a result, instead of crashing every 10 minutes, it will just crash after one hour. And I know there's a bug at that time. So there's a type of bug called memory leak. I know there's a bug in my code, and I spent a lot of time, and there was an engineer helping me checking my code. We spent a lot of time. We were just not able to find that bug. And at the end, we... the solution is I was just sitting in front of my computer waiting for my program to crash and restart. [LAUGHTER] And at that time, because there was very little remote working option, so in order to finish processing all those logs, it's like, you know, after dinner, I...

HUIZINGA: You have to stay all night!

LU: I have to stay all night! And all my intern friends, they were saying, oh, Shan, you work really hard! And I'm just feeling like, you know, what I'm doing is just sitting in front of my computer waiting [LAUGHTER] for my program to crash so that I can restart it! And near the end of my internship, I finally find the bug.
It turns out that I missed a pair of brackets in one line of code.

HUIZINGA: That's it.

LU: That's it.

HUIZINGA: Oh, my goodness.

LU: And it turns out, because I was used to C, and in C, when you want to free, which means deallocate, an array, you just say free array. And if I remember correctly, in this language, C#, you have to say, free this array name and you put a bracket behind it. Otherwise, it will only free the first element. And I... it was a nightmare. And I also felt like, the most frustrating thing is, if it's a clever bug, right... [LAUGHS]

HUIZINGA: Sure.

LU: ...then you feel like at least I'm defeated by something complicated...

HUIZINGA: Smart.

LU: Something smart. And then it's like, you know, also all this ambition I had about, you know, doing creative work, right, with all these smart researchers in MSR (Microsoft Research), I feel like I ended up achieving very little in my summer internship.

HUIZINGA: But maybe the humility of making a stupid mistake is the kind of thing that somebody who's good at hunting bugs... It's like missing an error in the headline of an article, because the print is so big [LAUGHTER] that you're looking for the little things in the... I know that's a journalist's problem. Actually, I actually love that story. And it, kind of, presents a big picture of you, Shan, as a person who has a realistic self-awareness and humility, which I think is rare at times in the software world. So thanks for sharing that. So moving on. When we talked before, you mentioned the large variety of programming languages and how that can be a barrier to entry or at least a big hurdle to overcome in software programming and verification. But you also talked about, as we just mentioned, how LLMs have been a democratizing force...

LU: Yes.

HUIZINGA: ...and what you see now with the advent of tools like GitHub Copilot...

LU: Yes.

HUIZINGA: ...what... what's changed?

LU: Oh, so much has changed. Well, I don't even know how to start.
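A hedged aside on the bug itself: the semantics described, where freeing an array needs an extra pair of brackets, match C++'s delete versus delete[] (C# itself is garbage collected), so this small C++ sketch, with hypothetical names, reconstructs the pattern behind the leak:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical reconstruction of the bug pattern: an array allocated
// with new[] must be released with delete[]. Releasing it with plain
// delete is undefined behavior and in practice commonly leaks, which
// shows up as a slow memory leak like the one in the story.
struct Record {
    int payload = 0;
};

Record* make_records(std::size_t n) {
    return new Record[n];       // array allocation: pairs with delete[]
}

void release_records(Record* r) {
    delete[] r;                 // the pair of brackets that was missing
    // delete r;                // BUG: wrong deallocator for a new[] array
}
```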
Like, I used to be really scared about programming. You know, when I tell this story, a lot of people say, no, I don't believe you. And I feel like it's a trauma, you know.

HUIZINGA: Sure.

LU: I almost feel like it's like, you know, the college-day me, right, who was scared of starting any programming project. Somehow, I felt humiliated when asking those very, I feel like, stupid questions to my classmates. It almost changed my personality! It's like for a long time, whenever someone introduced me to a new software tool, my first reaction is, uh, I probably will not be able to successfully even install it. Like whenever, you know, there's a new language, my first reaction is, uh, no, I'm not good at it. And then, like, for example, this GitHub Copilot thing, actually, I did not try it until I joined Microsoft. And then I... actually, I haven't programmed for a long time. And then I started collaborating with people in Microsoft Research Asia, and he writes programs in Python, right. And I have never written a single line of Python code before. And also, this Verus tool. It helps you to verify code in Rust, but I have never learned Rust before. So I thought, OK, maybe let me just try GitHub Copilot. And wow! You know, it's like I realized, wow! Like... [LAUGHS]

HUIZINGA: I can do this!

LU: I can do this! And, of course, sometimes I feel like my colleagues may sometimes be surprised because on one hand it looks like I'm able to just finish, you know, write a Rust function. But on some other days, I ask very basic questions, [LAUGHTER] and I have those questions because, you know, the GitHub Copilot just helps me finish! [LAUGHS]

HUIZINGA: Right.

LU: You know, I'm just starting something to start it, and then it just helps me finish. And I wish, when I started my college, if at that time there was GitHub Copilot, I feel like, you know, my mindset towards programming and towards computer science might be different.
So it does make me feel very positive, you know, about, you know, what future we have, you know, with AI, with computer science.

HUIZINGA: OK, usually, I ask researchers at this time, what could possibly go wrong if you got everything right? And I was thinking about this question in a different way until just this minute. I want to ask you, what do you think that it means to have a tool that can do things for you that you don't have to struggle with? And maybe, is there anything good about the struggle? Because you're framing it as it sapped your confidence.

LU: [LAUGHS] Yes.

HUIZINGA: And at the same time, I see a woman who emerged stronger because of this struggle, with an amazing career, a huge list of publications, influential papers, citations, leadership role. [LAUGHTER] So in light of that...

LU: Right.

HUIZINGA: ...what do you see as the tension between struggling to learn a new language versus having this tool that can just do it that makes you look amazing? And maybe the truth of it is you don't know!

LU: Yeah. That's a very good point. I guess you need some kind of balance. And on one hand, yes, I feel like, again, right, this goes back to, like, my internship. I left with the frustration that I felt like I have so much creativity to contribute, and yet I could not because of this language barrier. You know, I feel positive in the sense that just from GitHub Copilot, right, how it has enabled me to just bravely try something new. I feel like this goes beyond just computer science, right. I can imagine it'll help people to truly unleash their creativity, not being bothered by some challenges in learning the tool. But on the other hand, you made a very good point. My adviser told me she feels like, you know, I write code slowly, but I tend to make fewer mistakes. And the difficulty of learning, right, and all these nightmares I had definitely made me more... more cautious?
I pay more respect to the task that is given to me, so there is definitely the other side of AI, right, which is, you feel like everything is easy and maybe you do not have the experience of those bugs, right, that a software can bring to you, and you have overreliance, right, on this tool.

HUIZINGA: Yeah!

LU: So hopefully, you know, some of the things we were doing now, right, like, for example, say verification, right, like bringing this mathematical rigor to AI, hopefully that can help.

HUIZINGA: Yeah. You know, even as you unpack the nuances there, it strikes me that both are good. Both having to struggle and learning languages and understanding...

LU: Yeah.

HUIZINGA: ...the core of it, and the idea that in natural language, you could just say, here's what I want to happen, and the AI does the code, the verification, etc. That said, do we trust it? And this was where I was going with the first "what could possibly go wrong?" question. How do we know that it is really as clever as it appears to be? [LAUGHS]

LU: Yeah, I think I would just use the research problem we are working on now, right. Like, I think on one hand, I can use AI to generate a proof, right, to prove the code generated by AI is correct. But having said that, even if we're wildly successful, you know, in this thing, human beings' expertise is still needed, because just take this as an example. What do you mean by correct, right?

HUIZINGA: Sure.

LU: And so someone first has to define what correctness means. And then, so far, the experience shows that you can't just define it using natural language, because our natural language is inherently imprecise.

HUIZINGA: Sure.

LU: So you still need to translate it to a formal specification in a programming language. It could be in a popular language like in Rust, right, which is what Verus is aiming at. And then we are, like, for example, some of the research we do is showing that, yes, you know, I can also use AI to do this translation from natural language to specification.
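To make concrete what a formal specification pins down, here is a hedged C++ sketch with hypothetical names. A verifier such as Verus proves a property like this statically, for all inputs, for Rust code; below, the property is merely a function that can be checked on particular inputs, which is the weaker but easy-to-read analogue:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Implementation: return the largest value in a non-empty vector.
int max_value(const std::vector<int>& xs) {
    int best = xs.front();
    for (int x : xs) {
        best = std::max(best, x);
    }
    return best;
}

// Specification of "correct": the result bounds every element and is
// itself an element. A verifier would prove this once and for all;
// here it can only be spot-checked at runtime.
bool max_spec_holds(const std::vector<int>& xs, int result) {
    const bool bounds = std::all_of(xs.begin(), xs.end(),
                                    [&](int x) { return result >= x; });
    const bool member =
        std::find(xs.begin(), xs.end(), result) != xs.end();
    return bounds && member;
}
```

The point of the sketch is the separation: "correct" is not prose but a second, machine-checkable artifact, which is exactly what the natural-language-to-specification translation has to produce.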
But again, then, who to verify that, right? So at the end of the day, I think we still do need to have humans in the loop. But what we can do is to lower the burden and make the interface not so complicated, right. So that it'll be easy for human beings to check what AI has been doing.

HUIZINGA: Yeah. You know, everything we're talking about just reinforces this idea that we're living in a time where the advances in computer science that seemed unrealistic or impossible, unattainable even a few years ago, are now so common that we take it for granted. And they don't even seem outrageous, but they are. So I'm interested to know what, if anything, you would classify now as blue sky research in your field. Maybe something in systems research today that looks like a moonshot. You've actually anchored this in the fact that you, kind of, have, you know, blinders on for the work you're doing (head down in the work you're doing), but even as you peek up from the work that might be outrageous, is there anything else? I just like to get this out there that, you know, what's going on 10 years down the line?

LU: You know, sometimes I feel like I'm just now so much into my own work, but, you know, occasionally, like, say, when I had a chat with my daughter and I explained to her, you know, oh, I'm working on, you know, not only having AI to generate code but also having AI to prove, right, the code is correct. And she would feel, wow, that sounds amazing! [LAUGHS] So I don't know whether that is, you know, a moonshot thing, but that's a thing that I'm super excited about...

HUIZINGA: Yeah.

LU: ...about the potential. And then there also have, you know, my colleagues, we spend a lot of time building systems, and it's not just about correctness, right. Like, the verification thing I'm doing now is related to automatically verify it's correct. But also, you need to do a lot of performance tuning, right. Just so that your system can react fast, right. It can have good utilization of computer resources.
And my colleagues are also working on using AI, right, to automatically do performance tuning. And I know what they are doing, so I don't particularly feel that's a moonshot, but I guess...

HUIZINGA: I feel like, because you are so immersed, [LAUGHTER] that you just don't see how much we think...

LU: Yeah!

HUIZINGA: ...it's amazing. Well, I'm just delighted to talk to you today, Shan. As we close, and you've sort of just done a little vision casting, but let's take your daughter, my daughter, [LAUGHTER] all of our daughters...

LU: Yes!

HUIZINGA: How does what we believe about the future in terms of these things that we could accomplish influence the work we do today, as sort of a vision casting for the next Shan Lu who's struggling in undergrad/grad school?

LU: Yes, yes, yes. Oh, thank you for asking that question. Yeah, I have to say, you know, I think we're in a very interesting time, right, with all this AI thing.

HUIZINGA: Isn't that a curse in China? May you live in interesting times!

LU: And I think there were times, actually, you know, before I myself fully embraced AI, I was indeed... I had my daughter in mind. I was worried, when she grows up, what would happen? There will be no job for her because everything will be done by AI!

HUIZINGA: Oh, interesting.

LU: But then now, now that I have, you know, kind of fully embraced AI myself, actually, I see this more and more positive. Like you said, I remember, you know, those older days myself, right. That is really, like, I have this struggle that I feel like I can do better. I feel like I have ideas to contribute, but just for whatever reason, right, it took me forever to learn something which I feel like is a very mechanical thing, but it just takes me forever to learn, right. And then now actually, I see this hope, right, with AI, you know, a lot of mechanical things that can actually now be done in a much more automated way by AI, right.
So then now truly, you know, my daughter, many girls, many kids out there, right, whatever, you know, they are good at, their creativity, it'll be much easier, right, for them to contribute their creativity to whatever discipline they are passionate about. Hopefully, they don't have to, you know, go through what I went through, right, to finally be able to contribute. But then, of course, you know, at the same time, I do feel this responsibility of me, my colleagues, MSR, we have the capability and also the responsibility, right, of building AI tools in a responsible way so that it will be used in a positive way by the next generation.

HUIZINGA: Yeah. Shan Lu, thank you so much for coming on the show today. [MUSIC] It's been absolutely delightful, instructive, informative, wonderful.

LU: Thank you. My pleasure.
  • Research Focus: Week of January 13, 2025
    www.microsoft.com
In this edition: We introduce privacy enhancements for multiparty deep learning, a framework using smaller, open-source models to provide relevance judgments, and other notable new research. We congratulate Yasuyuki Matsushita, who was named an IEEE Computer Society Fellow. We've included a recap of the extraordinary, far-reaching work done by researchers at Microsoft in 2024.

NEW RESEARCH

AI meets materials discovery

Two of the transformative tools that play a central role in Microsoft's work on AI for science are MatterGen and MatterSim. In the world of materials discovery, each plays a distinct yet complementary role in reshaping how researchers design and validate new materials.

Read the story

NEW RESEARCH

Distributed training enables multiple parties to jointly train a machine learning model on their respective datasets, which can help address the challenges posed by modern machine learning's requirements for large volumes of diverse data. However, this can raise security and privacy issues: protecting each party's data during training and preventing leakage of private information from the model after training through various inference attacks.

In a recent paper, "Communication Efficient Secure and Private Multi-Party Deep Learning," researchers from Microsoft address these concerns simultaneously by designing efficient Differentially Private, secure Multiparty Computation (DP-MPC) protocols for jointly training a model on data distributed among multiple parties. This DP-MPC protocol in the two-party setting is 56-to-794 times more communication-efficient and 16-to-182 times faster than previous such protocols. This work simplifies and improves on previous attempts to combine techniques from secure multiparty computation and differential privacy, especially in the context of training machine learning models.

Read the paper

NEW RESEARCH

Training and evaluating retrieval systems requires significant relevance judgments, which are traditionally collected from human assessors.
This process is both costly and time-consuming. Large language models (LLMs) have shown promise in generating relevance labels for search tasks, offering a potential alternative to manual assessments. Current approaches often rely on a single LLM. While effective, this approach can be expensive and prone to intra-model biases that can favor systems leveraging similar models.

In a recent paper, "JudgeBlender: Ensembling Judgments for Automatic Relevance Assessment," researchers from Microsoft introduce a framework that employs smaller, open-source models to provide relevance judgments by combining evaluations across multiple LLMs (LLMBlender) or multiple prompts (PromptBlender). By leveraging the LLMJudge benchmark, they compare JudgeBlender with state-of-the-art methods and the top performers in the LLMJudge challenge. This research shows that JudgeBlender achieves competitive performance, demonstrating that very large models are often unnecessary for reliable relevance assessments.

Read the paper

NEW RESEARCH

Congestion games are used to describe the behavior of agents who share a set of resources. Each player chooses a combination of resources, which may become congested, decreasing utility for the players who choose them. Players can avoid congestion by choosing combinations that are less popular. This is useful for modeling a range of real-world scenarios, such as traffic flow, data routing, and wireless communication networks.

In a recent paper, "Convergence to Equilibrium of No-regret Dynamics in Congestion Games," researchers from Microsoft and external colleagues propose CongestEXP, a decentralized algorithm based on the classic exponential weights method. They evaluate CongestEXP in a traffic congestion game setting. As more drivers use a particular route, congestion increases, leading to higher travel times and lower utility.
Players can choose a different route every day to optimize their utility, but the observed utility by each player may be subject to randomness due to uncertainty (e.g., bad weather). The researchers show that this approach provides both regret guarantees and convergence to Nash equilibrium, where no player can unilaterally improve their outcome by changing their strategy.

Read the paper

NEW RESEARCH

Research and development (R&D) plays a pivotal role in boosting industrial productivity. However, the rapid advance of AI has exposed the limitations of traditional R&D automation. Current methods often lack the intelligence needed to support innovative research and complex development tasks, underperforming human experts with deep knowledge.

LLMs trained on vast datasets spanning many subjects are equipped with extensive knowledge and reasoning capabilities that support complex decision-making in diverse workflows. By autonomously performing tasks and analyzing data, LLMs can significantly increase the efficiency and precision of R&D processes.

In a recent article, researchers from Microsoft introduce RD-Agent, a tool that integrates data-driven R&D systems and harnesses advanced AI to automate innovation and development. At the heart of RD-Agent is an autonomous agent framework with two key components: (a) Research and (b) Development. Research focuses on actively exploring and generating new ideas, while Development implements these ideas.
Both components improve through an iterative process, illustrated in Figure 1 of the article, which ensures the system becomes increasingly effective over time.

Read the article

Microsoft Research | In case you missed it

Microsoft Research 2024: A year in review (December 20, 2024)
Microsoft Research did extraordinary work this year, using AI and scientific research to make progress on real-world challenges like climate change, food security, global health, and human trafficking. Here's a look back at the broad range of accomplishments and advances in 2024.

AIOpsLab: Building AI agents for autonomous clouds (December 20, 2024)
AIOpsLab is a holistic evaluation framework for researchers and developers that enables the design, development, evaluation, and enhancement of AIOps agents, and also serves the purpose of reproducible, standardized, interoperable, and scalable benchmarks.

Yasuyuki Matsushita, IEEE Computer Society 2025 Fellow (December 19, 2024)
Congratulations to Yasuyuki Matsushita, Senior Principal Research Manager at Microsoft Research, who was named a 2025 IEEE Computer Society Fellow. Matsushita was recognized for contributions to photometric 3D modeling and computational photography.

View more news and awards
  • MatterGen: A new paradigm of materials design with generative AI
    www.microsoft.com
Materials innovation is one of the key drivers of major technological breakthroughs. The discovery of lithium cobalt oxide in the 1980s laid the groundwork for today's lithium-ion battery technology. It now powers modern mobile phones and electric cars, impacting the daily lives of billions of people. Materials innovation is also required for designing more efficient solar cells, cheaper batteries for grid-level energy storage, and adsorbents to recycle CO2 from the atmosphere.

Finding a new material for a target application is like finding a needle in a haystack. Historically, this task has been done via expensive and time-consuming experimental trial and error. More recently, computational screening of large materials databases has allowed researchers to speed up this process. Nonetheless, finding the few materials with the desired properties still requires the screening of millions of candidates.

Today, in a paper published in Nature, we share MatterGen, a generative AI tool that tackles materials discovery from a different angle. Instead of screening the candidates, it directly generates novel materials given prompts of the design requirements for an application. It can generate materials with desired chemical, mechanical, electronic, or magnetic properties, as well as combinations of different constraints. MatterGen enables a new paradigm of generative AI-assisted materials design that allows for efficient exploration of materials, going beyond the limited set of known ones.

Figure 1: Schematic representation of screening and generative approaches to materials design

A novel diffusion architecture

MatterGen is a diffusion model that operates on the 3D geometry of materials. Much like an image diffusion model generates pictures from a text prompt by modifying the color of pixels from a noisy image, MatterGen generates proposed structures by adjusting the positions, elements, and periodic lattice from a random structure.
The diffusion architecture is specifically designed for materials to handle specialties like periodicity and 3D geometry.

Figure 2: Schematic representation of MatterGen, a diffusion model to generate novel and stable materials. MatterGen can be fine-tuned to generate materials under different design requirements, such as specific chemistry, crystal symmetry, or materials properties.

The base model of MatterGen achieves state-of-the-art performance in generating novel, stable, diverse materials (Figure 3). It is trained on 608,000 stable materials from the Materials Project (MP) and Alexandria (Alex) databases. The performance improvement can be attributed to both the architecture advancements as well as the quality and size of our training data.

Figure 3: Performance of MatterGen and other methods in the generation of stable, unique, and novel structures. The training dataset for each method is indicated in parentheses. The purple bar highlights performance improvements due to MatterGen's architecture alone, while the teal bar highlights performance improvements that come also from the larger training dataset.

MatterGen can be fine-tuned with a labelled dataset to generate novel materials given any desired conditions. We demonstrate examples of generating novel materials given a target's chemistry and symmetry, as well as electronic, magnetic, and mechanical property constraints (Figure 2).

Outperforming screening

Figure 4: Performance of MatterGen (teal) and traditional screening (yellow) in finding novel, stable, and unique structures that satisfy the design requirement of having a bulk modulus greater than 400 GPa.

The key advantage of MatterGen over screening is its ability to access the full space of unknown materials. In Figure 4, we show that MatterGen continues to generate more novel candidate materials with high bulk modulus above 400 GPa, for example, which are hard to compress.
In contrast, the screening baseline saturates as it exhausts the known candidates.

Handling compositional disorder

Figure 5: Illustration of compositional disorder. Left: a perfect crystal without compositional disorder and with a repeating unit cell (black dashed). Right: a crystal with compositional disorder, where each site has a 50% probability of hosting a yellow or teal atom.

Compositional disorder (Figure 5) is a commonly observed phenomenon where different atoms can randomly swap their crystallographic sites in a synthesized material. Recently (opens in new tab), the community has been exploring what it means for a material to be novel in the context of computationally designed materials, as widely employed algorithms will not distinguish between pairs of structures where the only difference is a permutation of similar elements in their respective sites.

We provide an initial solution to this issue by introducing a new structure matching algorithm that accounts for compositional disorder. The algorithm assesses whether a pair of structures can be identified as ordered approximations of the same underlying compositionally disordered structure. This provides a new definition of novelty and uniqueness, which we adopt in our computational evaluation metrics. We also make our algorithm publicly available (opens in new tab) as part of our evaluation package.

Experimental lab verification

Figure 6: Experimental validation of the proposed compound, TaCr2O6

In addition to our extensive computational evaluation, we have validated MatterGen's capabilities through experimental synthesis.
In collaboration with the team led by Prof. Li Wenjie from the Shenzhen Institutes of Advanced Technology (opens in new tab) (SIAT) of the Chinese Academy of Sciences, we have synthesized a novel material, TaCr2O6, whose structure was generated by MatterGen after conditioning the model on a bulk modulus value of 200 GPa. The synthesized material's structure aligns with the one proposed by MatterGen, with the caveat of compositional disorder between Ta and Cr. Additionally, we experimentally measured a bulk modulus of 169 GPa against the 200 GPa given as the design specification, a relative error below 20%, which is very close from an experimental perspective. If similar results can be translated to other domains, they will have a profound impact on the design of batteries, fuel cells, and more.

AI emulator and generator flywheel

MatterGen presents a new opportunity for AI-accelerated materials design, complementing our AI emulator MatterSim. MatterSim follows the fifth paradigm of scientific discovery, significantly accelerating the speed of material property simulations. MatterGen in turn accelerates the speed of exploring new material candidates with property-guided generation. MatterGen and MatterSim can work together as a flywheel to speed up both the simulation and exploration of novel materials.

Making MatterGen available

We believe the best way to make an impact in materials design is to make our model available to the public. We release the source code of MatterGen (opens in new tab) under the MIT license, together with the training and fine-tuning data. We welcome the community to use and build on top of our model.

MatterGen represents a new paradigm of materials design enabled by generative AI technology. It explores a significantly larger space of materials than screening-based methods. It is also more efficient, guiding materials exploration with prompts.
Similar to how generative AI has impacted drug discovery (opens in new tab), it will have a profound impact on how we design materials across broad domains, including batteries, magnets, and fuel cells.

We plan to continue our work with external collaborators to further develop and validate the technology. "At the Johns Hopkins University Applied Physics Laboratory (APL), we're dedicated to the exploration of tools with the potential to advance discovery of novel, mission-enabling materials. That's why we are interested in understanding the impact that MatterGen could have on materials discovery," said Christopher Stiles, a computational materials scientist leading multiple materials discovery efforts at APL.

Acknowledgement

This work is the result of highly collaborative team efforts at Microsoft Research AI for Science. The full author list includes: Claudio Zeni, Robert Pinsler, Daniel Zügner, Andrew Fowler, Matthew Horton, Xiang Fu, Zilong Wang, Aliaksandra Shysheya, Jonathan Crabbé, Shoko Ueda, Roberto Sordillo, Lixin Sun, Jake Smith, Bichlien Nguyen, Hannes Schulz, Sarah Lewis, Chin-Wei Huang, Ziheng Lu, Yichi Zhou, Han Yang, Hongxia Hao, Jielan Li, Chunlei Yang, Wenjie Li, Ryota Tomioka, Tian Xie.
  • Ideas: AI for materials discovery with Tian Xie and Ziheng Lu
    www.microsoft.com
Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

TIAN XIE: Yeah,

ZIHENG LU: Previously, a lot of people are using these atomistic simulators and these generative models alone. But if you think about it, now that we have these two foundation models together, it really can make things different, right. You have a very good idea generator. And you have a very good goalkeeper. And you put them together. They form a loop. And now you can use this loop to design materials really quickly.

[TEASER ENDS]

LINDSAY KALTER: You're listening to Ideas, a Microsoft Research Podcast that dives deep into the world of technology research and the profound questions behind the code. In this series, we'll explore the technologies that are shaping our future and the big ideas that propel them forward.

[MUSIC FADES]

I'm your guest host, Lindsay Kalter. Today I'm talking to Microsoft Principal Research Manager Tian Xie and Microsoft Principal Researcher Ziheng Lu. Tian is doing fascinating work with MatterGen, an AI tool for generating new materials guided by specific design requirements. Ziheng is one of the visionaries behind MatterSim, which puts those new materials to the test through advanced simulations. Together, they're redefining what's possible in materials science. Tian and Ziheng, welcome to the podcast.

TIAN XIE: Very excited to be here.

ZIHENG LU: Thanks, Lindsay, very excited.

KALTER: Before we dig into the specifics of MatterGen and MatterSim, let's give our audience a sense of how you, as researchers, arrived at this moment. Materials science, especially at the intersection of computer science, is such a cutting-edge and transformative field. What first drew each of you to this space? And what, if any, moment or experience made you realize this was where you wanted to innovate? Tian, do you want to start?

XIE: So I started working on AI for materials back in 2015, when I started my PhD.
So I come as a chemist and materials scientist, but I was, kind of, figuring out what I want to do during my PhD. So there is actually one moment that really drove me into the field. That was AlphaGo. AlphaGo, kind of, came out in 2016, when it was able to beat the world champion in go. I was extremely impressed by that because I, kind of, learned how to play go in my childhood. I know how hard it is and how much effort those professional go players have spent, right, in learning about go. So I, kind of, have the feeling that if AI can surpass the world-leading go players, one day, it will surpass materials scientists, too, right, in their ability to design novel materials. So that's why I ended up deciding to…

LU: That's very interesting, Tian. So, actually, I think I started, like, two years before you as a PhD student. So, actually, I was trained as a computational materials scientist solely, not really an AI expert. But at that time, computational materials science did not really work that well. It works, but not that well. So after, like, two or three years, I went back to experiments for, like, another two or three years because, I mean, the experiment is always the gold standard, right. And I worked on these experiments for a few years, and then about three years ago, I went back to this field of computation, especially because of AI. At that time, I think GPT and these large AI models that we're currently using were not there, but we already had their prior forms, like BERT, so we saw the very large potential of AI. We knew that these large AIs might work. So one idea is really to use AI to learn the entire space of materials and really grasp the physics there, and that really drove me to this field, and that's why I'm here working on this field, yeah.

KALTER: We're going to get into what MatterGen and MatterSim mean for materials science: the potential, the challenges, and open questions.
But first, give us an overview of what each of these tools is, how they do what they do, and, as this show is about big ideas, the idea driving the work. Ziheng, let's have you go first.

LU: So MatterSim is a tool to do in silico characterizations of materials. If you think about working on materials, you have several steps. You first need to synthesize it, and then you need to characterize it. Basically, you need to know what property, what structures, whatever stuff about these materials. So for MatterSim, what we want to do is to really move a lot of this characterization process into computations. So the idea behind MatterSim is to really learn the fundamentals of physics. So we learn the energies and forces and stresses from these atomic structures and the charge densities, all of these things, and then with these, we can really simulate any sort of material using our computational machines. And then with these, we can really characterize a lot of these material properties using our computers, which is very fast. It's much faster than doing experiments, so we can accelerate materials design. So just in a word, basically, you input your material into your computer, a structure into your computer, and MatterSim will try to simulate these materials like what you do in a furnace or with an XRD.

KALTER: All right, thank you very much. Tian, why don't you tell us about MatterGen?

XIE: Yeah, thank you. So, actually, Ziheng, once you start with explaining MatterSim, it makes it much easier for me to explain MatterGen. So MatterGen actually represents a new way to design materials with generative AI. Material discovery is like finding needles in a haystack. You're looking for a material with a very specific property for a material application. For example, like finding a room-temperature superconductor or finding a solid that can conduct a lithium ion very well inside a battery.
So it's like finding one very specific material from a million, kind of, candidates. So the conventional way of doing material discovery is via screening, where you, kind of, go over millions of candidates to find the one that you're looking for, where MatterSim is able to significantly accelerate that process by making the simulation much faster. But it's still very inefficient because you need to go through these million candidates, right. So with MatterGen, you can, kind of, directly generate materials given the prompts of the design requirements for the application. So this means that you can discover useful materials much more efficiently. And it also allows us to explore a much larger space beyond the set of known materials.

KALTER: Thank you, Tian. Can you tell us a little bit about how MatterGen and MatterSim work together?

XIE: So you can really think about MatterSim and MatterGen accelerating different parts of the materials discovery process. MatterSim is trying to accelerate the simulation of material properties, while MatterGen is trying to accelerate the search for novel material candidates. It means that they can really work together as a flywheel, and you can compound the acceleration from both models. They are also both foundation AI models, meaning they can both be used for a broad range of materials design problems. So we're really looking forward to seeing how they can, kind of, work together iteratively as a tool to design novel materials for a broad range of applications.

LU: I think that's a very good, like, general introduction of how they work together. I think I can provide an example of how they really fit together.
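The flywheel the two describe, a generator proposing candidates and a simulator gatekeeping them, can be sketched as a toy loop. Every function and number here is a stand-in, not the real MatterGen or MatterSim API:

```python
import random

def generate_candidates(target_modulus, n, rng):
    """Stand-in for MatterGen: propose n candidate 'materials', here just
    bulk-modulus values scattered around the prompt's target (GPa)."""
    return [rng.gauss(target_modulus, 50.0) for _ in range(n)]

def simulate_modulus(candidate):
    """Stand-in for MatterSim: score a candidate's property. In this toy
    the candidate IS its value; a real simulator would compute it."""
    return candidate

def design_loop(target, n_candidates=1000, top_k=5, seed=0):
    rng = random.Random(seed)
    candidates = generate_candidates(target, n_candidates, rng)   # idea generator
    ranked = sorted(candidates, key=lambda c: abs(simulate_modulus(c) - target))
    return ranked[:top_k]                                         # send top-k to the lab

best = design_loop(400.0)  # the few candidates closest to the 400 GPa prompt
```

The structure, generate broadly, screen cheaply, synthesize only the survivors, is the point; the worked example that follows fills in the same loop with real properties.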
If you want a material with a specific, like, bulk modulus or lithium-ion conductivity or thermal conductivity for your CPU chips, basically what you want to do is start with a pool of material structures, like some structures from a database, and then you compute, or you characterize, your wanted property from that stack of materials. And then, once you've got these property-and-structure pairs, you input these pairs into MatterGen. And MatterGen will be able to give you a lot more of these structures that are highly possible to be real. But the number will be very large. For example, for the bulk modulus, I don't remember the number we generated in our work. Was it like thousands, tens of thousands?

XIE: Thousands, tens of thousands.

LU: Yeah, that would be a very large pool even with MatterGen, so then the next step will be, how would you like to screen that? You cannot really just send all of those structures to a lab to synthesize. It's too much, right. That's when MatterSim comes in again. So MatterSim comes in and screens all those structures again and sees which ones are the most likely to be synthesized and which ones have the closest property you wanted. And then after screening, you probably get five, 10 top candidates, and then you send them to a lab. Boom, everything goes down. That's it.

KALTER: I'm wondering if there's any prior research or advancements that you drew from in creating MatterGen and MatterSim. Were there any specific breakthroughs that influenced your approaches at all?

LU: Thanks, Lindsay. I think I'll take that question first. So interestingly for MatterSim, a very fundamental idea was drawn from Chi Chen, who was a previous lab mate of mine and now also works for Microsoft at Microsoft Quantum. He made this fantastic model named M3GNet, which is a prior form of a lot of these large-scale models for atomistic simulations. That model, M3GNet, actually resolves the near-ground-state prediction problem.
I mean, the near-ground-state problem sounds like a fancy but not realistic term, but what it actually means is that the model can simulate materials at near-zero-Kelvin states. So basically at very low temperatures. So at that time, we were thinking, since the models are now able to simulate materials at their near ground states, that's not a very large space. But if you also look at other larger models, like GPT, whatever, those models are large enough to simulate the entire human language. So it's possible to really extend the capability from these prior models to a very large space. Because we believe in the capability of AI, that really drove us to use MatterSim to learn the entire space of materials. I mean, the entire space really means the entire periodic table, all the temperatures and the pressures people can actually grasp.

XIE: Yeah, I still remember a lot of the amazing works from Chi Chen when we were, kind of, back working on property-prediction models. So, yeah, so the problem of generating materials from properties is actually a pretty old one. I still remember back in 2018, when I was, kind of, working on CGCNN (crystal graph convolutional neural networks) and giving a talk about property-prediction models, right, one of the first questions people asked was, OK, can you invert this process? Instead of going from material structure to properties, can you, kind of, inversely generate the materials directly from their property conditions? So in a way, this is, kind of, like a dream for materials scientists, some people even call it, like, the holy grail, because, like, the end goal is really about finding materials [whose] property, right, will satisfy your application. So I've been, kind of, thinking about this problem for a while, and also there has been a lot of work, right, over the past few years in the community to build generative models for materials. A lot of people tried before, like 2020, using ideas like VAEs or GANs.
But it's hard to represent materials in these types of generative model architectures, and many of those models generated relatively poor candidates. So I thought it was a hard problem. I, kind of, knew it for a while. But there were no good solutions back then. So I started to focus more on this problem during my postdoc, when I studied that in 2020 and kept working on it in 2021. At the beginning, I wasn't really sure exactly what approach to take because it's, kind of, like an open question, and I really tried a lot of random ideas. So one day actually in my group back then with Tommi Jaakkola and Regina Barzilay at MIT's CSAIL (Computer Science & Artificial Intelligence Laboratory), we, kind of, got to know this method called the diffusion model. It was a very early stage of the diffusion model back then, but it already began to show very promising signs, kind of, achieving state of the art in many problems like 3D point cloud generation and 3D molecular conformer generation. So the works that really inspired me a lot were two works on molecular conformer generation. One is ConfGF, and one is GeoDiff. So they, kind of, inspired me to, kind of, focus more on diffusion models. That actually led to CDVAE (crystal diffusion variational autoencoder). So it's interesting that we, kind of, spent like a couple of weeks trying all these diffusion ideas, and without that much work, it actually worked quite out of the box. And at that time, CDVAE achieved much better performance than any previous models in materials generation, and we were, kind of, super happy with that. So after CDVAE, I, kind of, joined Microsoft, now working with more people together on this problem of generative models for materials. So we, kind of, knew what the limitations of CDVAE are: it can do unconditional material generation well, meaning it can generate novel material structures, but it is very hard to use CDVAE to do property-guided generation.
So basically, it uses an architecture called a variational autoencoder, where you have a latent space. So the way that you do property-guided generation there was to do, kind of, a gradient update inside the latent space. But because the latent space wasn't learned very well, you actually cannot do, kind of, good property-guided generation. We only managed to do energy-guided generation, but it wasn't successful in going beyond energy. So that got us really thinking, right, how can we make the property-guided generation much better? So I remember, like, one day, actually, my colleague, Daniel Zügner, really showed me this blog which basically explains this idea of classifier-free guidance, which is the powerhouse behind the text-to-image generative models. And so, yeah, then we began to think about, can we actually make the diffusion model work for classifier-free guidance? That led us to remove the, kind of, the variational autoencoder component from CDVAE and begin to work on a pure diffusion architecture. There was then, kind of, a lot of development around that. But it turns out that classifier-free guidance is really the key to making property-guided generation work, and then, combined with a lot more effort in, kind of, improving the architecture and also generating more data and also trying out all these different downstream tasks, that ended up leading to MatterGen as we see it today.

KALTER: Yeah, I think you've both done a really great job of explaining how MatterGen and MatterSim work together and how MatterGen can offer a lot in terms of reducing the amount of time and work that goes into finding new materials. Tian, how does the process of using MatterGen to generate materials translate into real-world applications?

XIE: Yeah, that's a fantastic question. So one way that I think about MatterGen, right, is that you can think about it as, like, a copilot for materials scientists, right.
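Classifier-free guidance, which Tian credits as the key, blends an unconditional and a property-conditioned prediction of the diffusion noise at each step. A minimal sketch with made-up numbers (these toy vectors are not MatterGen's actual networks or outputs):

```python
import numpy as np

def cfg_prediction(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    noise estimate toward (and past) the conditional one."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

eps_u = np.array([0.2, -0.1, 0.4])  # toy unconditional noise estimate
eps_c = np.array([0.5,  0.0, 0.1])  # toy property-conditioned estimate
mild = cfg_prediction(eps_u, eps_c, 1.0)    # scale 1 recovers the conditional estimate
strong = cfg_prediction(eps_u, eps_c, 3.0)  # larger scales push harder toward the condition
```

The same model provides both estimates (the condition is simply dropped at training time with some probability), so no separate property classifier is needed.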
So it can help you to come up with, kind of, potentially good hypotheses for the materials design problems that you're looking at. So say you're trying to design a battery, right. So you may have some ideas over, OK, what candidates you want to make, but this is, kind of, based on your own experience, right. Depths of experience as a researcher. But MatterGen is able to, kind of, learn from a very broad set of data, so therefore, it may be able to come up with some good suggestions, even surprising suggestions, for you so that you can, kind of, try them out, right, both with computation or even one day in a wet lab and experimentally synthesize them. But I also want to note that, in a way, this is still an early stage in generative AI for materials, which means that I don't expect all the candidates MatterGen generates will, kind of, suit your needs, right. So you still need to, kind of, look into them with expertise or with some kind of computational screening. But…

KALTER: I want to pivot a little bit to the MatterSim side of things. I know identifying new combinations of compounds is key to meeting changing needs for things like sustainable materials. But testing them is equally important to developing materials that can be put to use. Ziheng, how does MatterSim handle the uncertainty of how materials behave under various conditions, and how do you ensure that the predictions remain robust despite the inherent complexity of molecular systems?

LU: Thanks. That's a very, very good question. So uncertainty quantification is key to making sure all these predictions and simulations are trustworthy. And that's actually one of the questions we get almost every time after a presentation. So people, especially those experimentalists, would ask, well, I've been using your model; how do I know those predictions are true under the very complex conditions I'm using in my experiments?
So to understand how we deal with uncertainty, we need to know how MatterSim really functions in predicting an arbitrary property, especially under the condition you want, like the temperature and pressure. That would be quite complex, right? So in the ideal case, we would hope that by using MatterSim, you can directly simulate the properties you want using molecular dynamics combined with statistical mechanics. If so, it would be easy to really quantify the uncertainty because there are just two parts: the error from the model and the error from the simulations, the statistical mechanics. So the error from the model can be measured by what we call an ensemble. So basically you start with different random seeds when you train the model, and then when you predict your property, you use several models from the ensemble, and then you get different numbers. If the variance from the numbers is very large, you'll say the prediction is not that trustworthy. But a lot of times, we will see the variance is very small. So basically, if an ensemble of several different models gives you almost exactly the same number, you're quite sure that the number is somehow very, like, useful. So that's one level of the way we want to get our property. But sometimes, it's very hard to really directly simulate the property you want. For example, for catalytic processes, it's very hard to imagine how you really get those coefficients. It's very hard. The process is just too complicated. So for that process, what we do is to really use the, what we call, embeddings learned from the entire material space. So basically that vector we learned for any arbitrary material. And then starting from that, we build a very shallow layer of a neural network to predict the property, but that also means you need to bring in some of your experimental or simulation data from your side. And for that way of predicting a property, to measure the uncertainty, it's still like the two levels, right.
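The ensemble recipe Ziheng describes, same data, different random seeds, trust the spread, can be shown in miniature with toy regressors standing in for MatterSim models (the data, noise level, and models are all invented for illustration):

```python
import numpy as np

def train_toy_model(x, y, seed):
    """Stand-in for one ensemble member: a linear fit whose per-seed
    noise mimics the randomness of training a real model."""
    rng = np.random.default_rng(seed)
    w = np.polyfit(x, y + 0.01 * rng.standard_normal(y.shape), 1)
    return lambda q: np.polyval(w, q)

x = np.linspace(0.0, 1.0, 20)       # toy training inputs
y = 2.0 * x + 1.0                   # toy ground-truth property
ensemble = [train_toy_model(x, y, seed) for seed in range(8)]

def predict_with_uncertainty(q):
    preds = np.array([model(q) for model in ensemble])
    return preds.mean(), preds.std()  # large spread => do not trust the number

mean, std = predict_with_uncertainty(0.5)            # inside training range: tight agreement
mean_far, std_far = predict_with_uncertainty(50.0)   # far extrapolation: members disagree
```

The members agree closely where the training data covers the input and fan out where it does not, which is exactly the signal used to flag untrustworthy predictions.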
So we don't really have the statistical error anymore, but what we have is, like, only the model error. So you can still stick to the ensemble, and then it will work, right. So to be short, MatterSim can provide you an uncertainty to make sure the prediction tells you whether it's true or not.

KALTER: So in many ways, MatterSim is the realist in the equation, and it's there to sort of be a gatekeeper for MatterGen, which is the idea generator.

XIE: I really like the analogy.

LU: Yeah.

KALTER: As is the case with many AI models, the development of MatterGen and MatterSim relies on massive amounts of data. And here you use simulation to create the needed training data. Can you talk about that process and why you've chosen that approach, Tian?

XIE: So one advantage here is that we can really use large-scale simulation to generate data. So we have a lot of compute here at Microsoft on our Azure platform, right. So how we generate the data is that we use a method called density functional theory, DFT, which is a quantum mechanical method. And we use a simulation workflow built on top of DFT to simulate the stability of materials. So what we do is that we curate a huge number of material structures from multiple different sources of open data, mostly including the Materials Project and the Alexandria database, and in total, there are around 3 million material candidates coming from these two databases. But not all of these structures are stable. So therefore, we use DFT to compute their stability and filter down the candidates to make sure that our training data only has the most stable ones. This leads to around 600,000 training data points, which were used to train the base model of MatterGen. So I want to note that we actually also use MatterSim as part of the workflow, because MatterSim can be used to prescreen unstable candidates so that we don't need to use DFT to compute all of them.
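The two-stage filter Tian outlines, cheap prescreen first, expensive DFT only on the survivors, in schematic form. The stub energy functions and cutoff values are invented; the real workflow uses MatterSim and a DFT code:

```python
def cheap_stability_estimate(candidate):
    """Stand-in for MatterSim: fast, approximate energy above hull (eV/atom)."""
    return candidate["e_hull_approx"]

def expensive_dft_stability(candidate):
    """Stand-in for DFT: slow, accurate energy above hull (eV/atom)."""
    return candidate["e_hull_true"]

def build_training_set(candidates, prescreen_cut=0.3, dft_cut=0.1):
    survivors = [c for c in candidates if cheap_stability_estimate(c) < prescreen_cut]
    dft_calls = len(survivors)   # only survivors pay the expensive DFT cost
    stable = [c for c in survivors if expensive_dft_stability(c) < dft_cut]
    return stable, dft_calls

pool = [
    {"name": "A", "e_hull_approx": 0.05, "e_hull_true": 0.02},  # stable: passes both
    {"name": "B", "e_hull_approx": 0.20, "e_hull_true": 0.15},  # DFT rejects it
    {"name": "C", "e_hull_approx": 0.80, "e_hull_true": 0.90},  # prescreened out, no DFT run
]
stable, dft_calls = build_training_set(pool)
```

Only two of the three candidates ever reach DFT here, which is the shape of the savings described: the cheap model absorbs most of the screening load.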
I think at the end, we computed around 1 million DFT calculations; two-thirds of the candidates were already filtered out by MatterSim, which saved us a lot of compute in generating our training data.

LU: Tian, you have a very good description of how we really get those ground-state structures for the MatterGen model. Actually, we've also been using MatterGen for MatterSim to really get the training data. So if you think about the simulation space of materials, it's extremely large. So we would think of it in a way that it has three axes: basically the elements, the temperature, and the pressure. So if you think about existing databases, they have pretty good coverage of the element space. Basically, if we think about the Materials Project, NOMAD, they really have this very good coverage of lithium oxide, lithium sulfide, hydrogen sulfide, whatever, those different ground-state structures. But they don't really tell you how these materials behave under a certain temperature and pressure, especially under those extreme conditions like 1,600 Kelvin, which you really use to synthesize your materials. That's where we really focused on generating the data for MatterSim. So it's really easy to think about how we generate the data, right. You put your wanted material into a pressure cooker, basically, molecular dynamics; it can simulate the material's behavior under that temperature and pressure. So that's it. Sounds easy, right? But that's not true, because what we want is not one single material. What we want is the entire material space. So that would make the effort almost impossible because the space is just so large. So that's where we really developed this active learning pipeline. So basically, what we do is, like, we generate a lot of these structures for different elements and temperatures, pressures. Really, really a lot. And then what we do is, like, we ask the active learning, or the uncertainty measurements, to really say whether the model knows about this structure already.
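The uncertainty-gated loop Ziheng sketches can be illustrated with an ensemble whose members agree on well-covered inputs and diverge elsewhere. The toy models and the threshold are hypothetical, chosen only to make the gating visible:

```python
import statistics

def ensemble_spread(models, x):
    """Disagreement among ensemble members at input x."""
    return statistics.pstdev([m(x) for m in models])

def select_for_dft(models, pool, threshold=0.1):
    """Active learning: send a structure to (expensive) DFT only when
    the ensemble disagrees about it; skip what the model already knows."""
    to_label = [x for x in pool if ensemble_spread(models, x) > threshold]
    skipped = len(pool) - len(to_label)
    return to_label, skipped

# Toy ensemble: members agree for small x (well covered by training data)
# and diverge for large x (unexplored temperatures/pressures).
models = [lambda x, s=s: 2.0 * x + s * 0.05 * x * x for s in (-1, 0, 1)]
pool = [0.5, 1.0, 10.0, 20.0]
to_label, skipped = select_for_dft(models, pool)
```

Only the inputs where the ensemble fans out get labeled, which is how the pipeline avoids paying for DFT on structures the model has already learned.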
So if the model thinks, well, I think I know the structure already, then we don't really calculate this structure using density functional theory, as Tian just said. So this really saves us, like, 99% of the effort in generating the data. So in the end, by combining this molecular dynamics, basically the pressure cooker, together with active learning, we gathered around 17 million data points for MatterSim. So that was used to train the model. And now it can cover the entire periodic table and a lot of temperatures and pressures.

KALTER: Thank you, Ziheng. Now, I'm sure this is not news to either one of you, given that you're both at the forefront of these efforts, but there is a growing number of tools aimed at advancing materials science. So what is it about MatterGen and MatterSim, in their approach or capabilities, that distinguishes them?

XIE: Yeah, I think I can start. So I think, in the past year, there has been huge interest in building up generative AI tools for materials. So we have seen lots and lots of innovations from the community published in top conferences like NeurIPS, ICLR, ICML, etc. So I think what distinguishes MatterGen, in my point of view, are two things. First is that we are trained with a very big dataset that we curated very, very carefully, and we also spent quite a lot of time refining our diffusion architecture, which means that our model is capable of generating very, kind of, high-quality, highly stable and novel materials. We have some kind of bar plot in our paper showcasing the advantage of our performance. I think that's one key aspect. And I think the second aspect, which in my point of view is even more important, is that it has the ability to do property-guided generation.
Many of the works that we saw in the community are more focused on the problem of crystal structure prediction, which MatterGen can also do, but we focus more on property-guided generation because we think this is one of the key problems that materials scientists really care about. So the ability to do a very broad range of property-guided generation, and we have, kind of, both computational and now experimental results to validate those, I think that's the second strong point for MatterGen.

KALTER: Ziheng, do you want to add to that?

LU: Yeah, thanks, Lindsay. So on the MatterSim side, I think it's really the diverse conditions it can handle that make a difference. We've been talking about, like, the training data we collected really covering the entire periodic table and also, more importantly, the temperatures from 0 Kelvin to 5,000 Kelvin and the pressures from 0 gigapascals to 1,000 gigapascals. That really covers what humans can control nowadays. I mean, it's very hard to go beyond that. If you know anyone [who] can go beyond that, let me know. So that really makes MatterSim different. Like, it can handle realistic conditions. I think beyond that, I would say the combo between MatterSim and MatterGen really makes this set of tools different. So previously, a lot of people are using these atomistic simulators and these generative models alone. But if you think about it, now that we have these two foundation models together, they really can make things different, right. So we have the predictor; we have the generator; you have a very good idea generator. And you have a very good goalkeeper. And you put them together. They form a loop. And now you can use this loop to design materials really quickly.
So I would say, to me, now, when I think about it, it's really the combo that makes this set of tools different.

KALTER: I know that I've spoken with both of you recently about how there's so much excitement around this, and it's clear that we're on the precipice of this, as both of you have called it, a paradigm shift. And Microsoft places a very strong emphasis on ensuring that its innovations are grounded in reality and capable of addressing real-world problems. So with that in mind, how do you balance the excitement of scientific exploration with the practical challenges of implementation? Tian, do you want to take this?

XIE: Yeah, I think this is a very, very important point because of all the hype around AI that is happening right now, right. We must be very, very careful about the claims that we are making so that people will not have unrealistic expectations, right, over what these models can do. So for MatterGen, we're pretty careful about that. Basically, we're trying to say that this is an early stage of generative AI in materials, where this model will be improved over time quite significantly, but you should not say, oh, all the materials generated by MatterGen are going to be amazing. That's not what is happening today. So we try to be very careful to understand how far MatterGen is already capable of designing materials with real-world impact. So therefore, we went all the way to synthesize one material that was generated by MatterGen. So this material we generated is called tantalum chromium oxide. So this is a new material. It has not been discovered before. And it was generated by MatterGen by conditioning on a bulk modulus equal to 200 gigapascals. Bulk modulus is, like, the compressibility of the material. So we ended up measuring the synthesized material experimentally, and the measured bulk modulus is 169 gigapascals, which is within 20% error.
So this is a very good proof of concept, in our point of view, to show that, oh, you can actually give it a prompt, right, and then MatterGen can generate a material, and the material actually has a property that is very close to your target. But it's still a proof of concept. And we're still working to see how MatterGen can design materials that are much more useful with a much broader range of applications. And I'm sure that there will be more challenges we are seeing along the way. But we're looking forward to further working with our experimental partners to, kind of, push this further. And also working with MatterSim, right, to see how these two tools can be used to design really useful materials and bring this into real-world impact.

LU: Yeah, Tian, I think that's very well said. It's not really only for MatterGen. For MatterSim, we're also very careful, right. So we really want to make sure that people understand how these models really behave under their instructions and understand, like, what they can do and cannot do. So I think one thing that we really care about is that in the next few, maybe one or two, years, we want to really work with our experimental partners to make these realistic materials, like, in different areas so that we can, even us, really better understand the limitations and at the same time explore the forefront of materials science to make this excitement become true.

KALTER: Ziheng, could you give us a concrete example of what exactly MatterSim is capable of doing?

LU: Now MatterSim can really do, like, whatever you have on a potential energy surface. So what that means is, like, anything that can be simulated with energies, forces, and stresses alone. So to give you an example, the first example would be the stability of a material. So basically, you input a structure, and from the energies of the relaxed structures, you can really tell whether the material is likely to be stable at that composition, right.
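The stability check Lu describes boils down to comparing computed energies. A minimal sketch of that bookkeeping, with made-up numbers (in practice the total energies would come from a simulator such as MatterSim; the function name and toy values are illustrative assumptions):

```python
def formation_energy_per_atom(e_total, composition, e_ref):
    """Formation energy per atom: (E_total - sum_i n_i * E_ref_i) / N.
    A negative value suggests the compound is stable against decomposition
    into its elemental references (necessary, but not sufficient, for stability)."""
    n_atoms = sum(composition.values())
    e_elements = sum(n * e_ref[el] for el, n in composition.items())
    return (e_total - e_elements) / n_atoms

# Toy example: hypothetical binary compound A2B with made-up energies in eV
e_ref = {"A": -1.0, "B": -2.0}  # reference energies of the pure elements
ef = formation_energy_per_atom(e_total=-5.5,
                               composition={"A": 2, "B": 1},
                               e_ref=e_ref)
# (-5.5 - (2*(-1.0) + 1*(-2.0))) / 3 = -0.5 eV/atom, i.e., favorable
```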
So another example would be the thermal conductivity. Thermal conductivity is, like, a fundamental property of materials that tells you how fast heat can transfer in the material, right. So MatterSim can really simulate how fast this heat can go through your diamond, your graphene, your copper, right. So basically, those are two examples. These examples are based on energies and forces alone. But there are things MatterSim cannot do, at least for now. For example, you cannot really do anything related to electronic structure. So you cannot really compute the light absorption of a semitransparent material. That would be a no-no for now.

KALTER: It's clear from speaking with the researchers behind both MatterSim and MatterGen that despite these very rapid advancements in technology, you take very seriously the responsibility to consider the broader implications of the challenges that are still ahead. How do you think about the ethical considerations of creating entirely new materials and simulating their properties, particularly in terms of things like safety, sustainability, and societal impact?

XIE: Yeah, that's a fantastic question. So it's extremely important that we are making sure that these AI tools, they are not misused. A potential misuse, right, as you just mentioned, is that people begin to use these AI tools, MatterGen, MatterSim, to, kind of, design harmful materials. There was actually extensive discussion over how generative AI tools that were originally purposed for drug design can then be misused to create bioweapons. So at Microsoft, we take this very seriously because we believe that when we create new technologies, we must also ensure that the technology is used responsibly. So we have an extensive process to ensure that all of our models respect those ethical considerations.
In the meantime, as you mentioned, maybe sustainability and the societal impact, right, so there's a huge amount these AI tools, MatterGen, MatterSim, can do for sustainability because a lot of the sustainability challenges, they are really, at the end, materials design challenges, right. So therefore, I think that MatterGen and MatterSim can really help with that, in helping us to alleviate climate change and having a positive impact on the broader society.

KALTER: And, Ziheng, how about from a simulation standpoint?

LU: Yeah, I think Tian gave a very good, like, description. At Microsoft, we are really careful about these ethical, like, considerations. So I would add a little bit on the more, like, the bright side of things. Like, so MatterSim really carries out these simulations at atomic scales. So one thing you can think about is really the educational purpose. So back in my bachelor's and PhD period, I would sit, like, at the table and really grab a pen to deal with those very complex equations and get into the statistics using my pen. It's really painful. But now with MatterSim, these simulation tools at the atomic level, what you can do is really simulate the reactions, the movement of atoms, at atomic scale in real time. You can really see the chemical reactions and see the statistics. So you can get really the feeling, like a very direct feeling, of how the system works instead of just working on those toy systems with your pen. I think it's going to be a very good educational tool using MatterSim, yeah. Also MatterGen. MatterGen, as, like, a generative tool generating those i.i.d. (independent and identically distributed) samples, will be a perfect example to show students how the Boltzmann distribution works. I think, Tian, you will agree with that, right?

XIE: 100%. Yeah, I really, really like the example that Ziheng mentioned about the educational purposes.
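Lu's classroom example has a compact computational core: states are visited with probability proportional to exp(-E / (k_B T)). A generic sketch of that sampling rule with toy energy levels (this is a stdlib illustration of the Boltzmann distribution, not MatterGen itself):

```python
import math
import random

def boltzmann_sample(energies, temperature, k_b=1.0, rng=random):
    """Draw one state index with probability proportional to exp(-E / (k_B * T))."""
    weights = [math.exp(-e / (k_b * temperature)) for e in energies]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(energies) - 1  # guard against floating-point rounding

energies = [0.0, 0.5, 1.0]  # toy energy levels (arbitrary units)
random.seed(0)
counts = [0, 0, 0]
for _ in range(10_000):
    counts[boltzmann_sample(energies, temperature=1.0)] += 1
# Lower-energy states are visited more often, exactly as the Boltzmann factor predicts.
```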
I still remember, like, when I was, kind of, taking a materials simulation class, right. So everything is DFT. You, kind of, need to wait for an hour, right, to get some simulation. Maybe then you'll make some animation. Now you can do this in real time. This is, like, a huge step forward, right, for our young researchers to, kind of, gain a sense, right, about how atoms interact at an atomic level.

LU: Yeah, and the results are really, I mean, true; not really those toy models. I think it's going to be very exciting stuff.

KALTER: And, Tian, I'm directing this question to you, even though, Ziheng, I'm sure you can chime in, as well. But, Tian, I know that you and I have previously discussed this specifically. I know that you said back in, you know, 2017, 2018, that you knew an AI-based approach to materials science was possible but that even you were surprised by how far the technology has come so fast in aiding this area. What is the status of these tools right now? Are they in use? And if so, who are they available to? And, you know, what's next for them?

XIE: Yes, this is a fantastic question, right. So I think for AI generative tools like MatterGen, as I said many times earlier, it's still in its early stages. MatterGen is the first tool where we managed to show that generative AI can enable very broad property-guided generation, and we have managed to get experimental validation to show it's possible. But it will take more work to show, OK, it can actually design batteries, can design solar cells, right. It can design really useful materials in these broader domains. So this is, kind of, exactly why we are now taking a pretty open approach with MatterGen. We make our code, our training data, and model weights available to the general public. We're really hoping the community can use our tools on the problems that they care about and even build on top of them.
So in terms of what's next, I always like to use what happened with generative AI for drugs, right, to, kind of, predict how generative AI will impact materials. Three years ago, there was a lot of research around generative models for drugs, first coming from the machine learning community, right. Then all the big drug companies began to take notice, and then, kind of, researchers in these drug companies began to use these tools in actual drug design processes. My colleague Marwin Segler, because he, kind of, works together with Novartis in the Microsoft and Novartis collaboration, has basically been telling me that at the beginning, all the chemists in the drug companies, they're all very suspicious, right. The molecules generated by these generative models, they all look a bit weird, so they don't believe this will work. But once these chemists see one or two examples that actually turn out to perform pretty well in the experimental results, then they begin to build more trust, right, in these generative AI models. And today, these generative AI tools are part of the standard drug discovery pipeline that is widely used in all the drug companies. That is today. So I think generative AI for materials is going through a very similar period. People will have doubts; people will have suspicions at the beginning. But I think in three years, right, it will become a standard tool for how people are going to design new solar cells, design new batteries, and many other different applications.

KALTER: Great. Ziheng, do you have anything to add to that?

LU: So actually for MatterSim, we released the model, I think, back in December last year. I mean, both the weights and the models, right. So we're really grateful for how much the community has contributed to the repo. And now, I mean, we really welcome the community to contribute more to both MatterSim and MatterGen via our open-source code bases.
So, I mean, the community effort is really important, yeah.

KALTER: Well, it has been fascinating to pick your brains, and as we close, you know, I know that you're both capable of quite a bit, which you have demonstrated. I know that asking you to predict the future is a big ask, so I won't explicitly ask that. But just as a fun thought exercise, let's fast-forward 20 years and look back. How have MatterGen and MatterSim and the big ideas behind them impacted the world, and how are people better off because of how you and your teams have worked to make them a reality? Tian, you want to start?

XIE: Yeah, I think one of the biggest challenges our human society is going to face, right, in the next 20 years is going to be climate change, right, and there are so many materials design problems people need to solve in order to properly handle climate change, like finding new materials that can absorb CO2 from the atmosphere to create a carbon capture industry or battery materials that are able to do large-scale energy grid storage so that we can fully utilize all the wind power and the solar power, etc., right. So if you want me to make one prediction, I really believe that these AI tools, like MatterGen and MatterSim, are going to play a central role in our human ability to design these new materials for climate problems. So therefore, in 20 years, I would like to see that we have already solved climate change, right. We have large-scale energy storage systems designed by AI, so that we have basically removed all the fossil fuels, right, from our energy production, and for the rest of the carbon emissions that are very hard to remove, we will have a carbon capture industry with materials designed by AI that absorb the CO2 from the atmosphere. It's hard to predict exactly what will happen, but I think AI will play a key role, right, in defining what our society will look like in 20 years.

LU: Tian, very well said.
So I think instead of really describing the future, I would really quote a science fiction scene in Iron Man. So basically, in 20 years, I will say when we want to really get a new material, we will just sit in an office and say, "Well, J.A.R.V.I.S., can you design us a new material that really fits my newest MK 7 suit?" That will be the end. And it will run automatically, and we get this auto lab running, and all those MatterGen and MatterSim, these AI models, running, and then probably in a few hours, in a few days, we get the material.

KALTER: Well, I think I speak for many people from several industries when I say that I cannot wait to see what is on the horizon for these projects. Tian and Ziheng, thank you so much for joining us on Ideas. It's been a pleasure.

[MUSIC]

XIE: Thank you so much.

LU: Thank you.

[MUSIC FADES]
  • AutoGen v0.4: Reimagining the foundation of agentic AI for scale, extensibility, and robustness
    www.microsoft.com
Over the past year, our work on AutoGen has highlighted the transformative potential of agentic AI and multi-agent applications. Today, we are excited to announce AutoGen v0.4, a significant milestone informed by insights from our community of users and developers. This update represents a complete redesign of the AutoGen library, developed to improve code quality, robustness, generality, and scalability in agentic workflows.

The initial release of AutoGen generated widespread interest in agentic technologies. At the same time, users struggled with architectural constraints, an inefficient API compounded by rapid growth, and limited debugging and intervention functionality. Feedback highlighted the need for stronger observability and control, more flexible multi-agent collaboration patterns, and reusable components. AutoGen v0.4 addresses these issues with its asynchronous, event-driven architecture.

This update makes AutoGen more robust and extensible, enabling a broader range of agentic scenarios. The new framework includes the following features, inspired by feedback from both within and outside Microsoft.

Asynchronous messaging: Agents communicate through asynchronous messages, supporting both event-driven and request/response interaction patterns.

Modular and extensible: Users can easily customize systems with pluggable components, including custom agents, tools, memory, and models.
They can also build proactive and long-running agents using event-driven patterns.

Observability and debugging: Built-in metric tracking, message tracing, and debugging tools provide monitoring and control over agent interactions and workflows, with support for OpenTelemetry for industry-standard observability.

Scalable and distributed: Users can design complex, distributed agent networks that operate seamlessly across organizational boundaries.

Built-in and community extensions: The extensions module enhances the framework's functionality with advanced model clients, agents, multi-agent teams, and tools for agentic workflows. Community support allows open-source developers to manage their own extensions.

Cross-language support: This update enables interoperability between agents built in different programming languages, with current support for Python and .NET and additional languages in development.

Full type support: Interfaces enforce type checks at build time, improving robustness and maintaining code quality.

New AutoGen framework

As shown in Figure 1, the AutoGen framework features a layered architecture with clearly defined responsibilities across the framework, developer tools, and applications. The framework comprises three layers: core, agent chat, and first-party extensions.

Core: The foundational building blocks for an event-driven agentic system.

AgentChat: A task-driven, high-level API built on the core layer, featuring group chat, code execution, pre-built agents, and more.
This layer is most similar to AutoGen v0.2 (opens in new tab), making it the easiest API to migrate to.

Extensions: Implementations of core interfaces and third-party integrations, such as the Azure code executor and OpenAI model client.

Figure 1. The v0.4 update introduces a cohesive AutoGen ecosystem that includes the framework, developer tools, and applications. The framework's layered architecture clearly defines each layer's functionality. It supports both first-party and third-party applications and extensions.

In addition to the framework, AutoGen 0.4 includes upgraded programming tools and applications, designed to support developers in building and experimenting with AutoGen.

AutoGen Bench: Enables developers to benchmark their agents by measuring and comparing performance across tasks and environments.

AutoGen Studio: Rebuilt on the v0.4 AgentChat API, this low-code interface enables rapid prototyping of AI agents. It introduces several new capabilities:

Real-time agent updates: View agent action streams in real time with asynchronous, event-driven messages.

Mid-execution control: Pause conversations, redirect agent actions, and adjust team composition. Then seamlessly resume tasks.

Interactive feedback through the UI: Add a UserProxyAgent to enable user input and guidance during team runs in real time.

Message flow visualization: Understand agent communication through an intuitive visual interface that maps message paths and dependencies.

Drag-and-drop team builder: Design agent teams visually using an interface for dragging components into place and configuring their relationships and properties.

Third-party component galleries: Import and use custom agents, tools, and workflows from external galleries to extend functionality.

Magentic-One: A new generalist multi-agent application to solve open-ended web and file-based tasks across various domains.
This tool marks a significant step toward creating agents capable of completing tasks commonly encountered in both work and personal contexts.

Migrating to AutoGen v0.4

We implemented several measures to facilitate a smooth upgrade from the previous v0.2 API, addressing core differences in the underlying architecture. First, the AgentChat API maintains the same level of abstraction as v0.2, making it easy to migrate existing code to v0.4. For example, AgentChat offers an AssistantAgent and UserProxy agent with similar behaviors to those in v0.2. It also provides a team interface with implementations like RoundRobinGroupChat and SelectorGroupChat, which cover all the capabilities of the GroupChat class in v0.2. Additionally, v0.4 introduces many new functionalities, such as streaming messages, improved observability, saving and restoring task progress, and resuming paused actions where they left off. For detailed guidance, refer to the migration guide (opens in new tab).

Looking forward

This new release sets the stage for a robust ecosystem and strong foundation to drive advances in agentic AI applications and research. Our roadmap includes releasing .NET support, introducing built-in, well-designed applications and extensions for challenging domains, and fostering a community-driven ecosystem. We remain committed to the responsible development of AutoGen and its evolving capabilities. We encourage you to engage with us on AutoGen's Discord server (opens in new tab) and share feedback on the official AutoGen repository (opens in new tab) via GitHub Issues. Stay up to date with frequent AutoGen updates via X.

Acknowledgments

We would like to thank the many individuals whose ideas and insights helped formalize the concepts introduced in this release, including Rajan Chari, Ece Kamar, John Langford, Ching-An Chen, Bob West, Paul Minero, Safoora Yousefi, Will Epperson, Grace Proebsting, Enhao Zhang, and Andrew Ng.
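The asynchronous, event-driven messaging at the core of v0.4 can be illustrated with a plain asyncio sketch. The agent and queue names here are hypothetical; this is a minimal model of the message-passing pattern, not the actual AutoGen API.

```python
import asyncio

# Minimal event-driven message passing between two "agents", modeled with
# asyncio queues. Names are illustrative; this is not the AutoGen v0.4 API.
async def assistant(inbox: asyncio.Queue, outbox: asyncio.Queue):
    while True:
        msg = await inbox.get()           # event-driven: wake only when a message arrives
        if msg is None:                   # shutdown signal
            break
        await outbox.put(f"echo: {msg}")  # reply, i.e., the request/response pattern

async def main():
    to_agent, from_agent = asyncio.Queue(), asyncio.Queue()
    worker = asyncio.create_task(assistant(to_agent, from_agent))
    await to_agent.put("hello")           # send a request
    reply = await from_agent.get()        # await the response
    await to_agent.put(None)              # shut the agent down
    await worker
    return reply

reply = asyncio.run(main())
print(reply)  # echo: hello
```

Because each agent blocks only on its own inbox, many such loops can run concurrently on one event loop, which is the property that makes the architecture scale to larger agent networks.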
  • Research Focus: Week of December 16, 2024
    www.microsoft.com
Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.

NEW RESEARCH

The Compute Express Link (CXL) open standard interconnect enables integration of diverse types of memory into servers via its byte-addressable SerDes links. To fully utilize CXL-based heterogeneous memory systems (which combine different types of memory with varying access speeds), it's necessary to implement efficient memory tiering, a strategy to manage data placement across memory tiers for optimal performance. Efficiently managing these memory systems is crucial, but has been challenging due to the lack of precise and efficient tools for understanding how memory is accessed. In a recent paper, NeoMem: Hardware/Software Co-Design for CXL-Native Memory Tiering, researchers from Microsoft propose a novel solution that features a hardware/software co-design to address this problem. NeoMem offloads memory profiling functions to CXL device-side controllers, integrating a dedicated hardware unit called NeoProf, which monitors memory accesses and provides the operating system (OS) with crucial page hotness statistics and other system state information. On the OS kernel side, the researchers designed a revamped memory-tiering strategy, enabling accurate and timely hot page promotion based on NeoProf statistics. Implemented on a real FPGA-based CXL memory platform and Linux kernel v6.3, NeoMem demonstrated 32% to 67% geomean speedup over several existing memory tiering solutions.

Read the paper

NEW RESEARCH

Planning and conducting chemical syntheses is a significant challenge in the discovery of functional small molecules, which limits the potential of generative AI for molecular inverse design.
Although early machine learning-based retrosynthesis models have shown the ability to predict reasonable routes, they are less accurate for infrequent, yet important reactions. In a recent paper, Chimera: Accurate retrosynthesis prediction by ensembling models with diverse inductive biases, researchers from Microsoft and external colleagues address this limitation with a new framework for building highly accurate reaction models. Chimera incorporates two newly developed models, each achieving state-of-the-art performance in their respective categories. Evaluations by PhD-level organic chemists show that Chimera's predictions are preferred for their higher quality compared to baseline models. The researchers further validate Chimera's robustness by applying its largest-scale model to an internal dataset from a major pharmaceutical company, demonstrating its ability to generalize effectively under distribution shifts. This new framework shows the potential to substantially accelerate the development of even more accurate and versatile reaction prediction models.

Read the paper

NEW RESEARCH

In bioinformatics and computational biology, data analysis often involves chaining command-line programs developed by specialized teams at different institutions.
These tools, which vary widely in age, software stacks, and dependencies, lack a common programming interface, which makes integration, workflow management and reproducibility challenging. A recent article (opens in new tab) emphasizes the development, adoption and implementation of the Global Alliance for Genomics and Health (GA4GH) Task Execution Service (TES) API, created in collaboration with researchers at Microsoft and other institutions. The TES API offers a unified schema and interface for submitting and managing tasks, seamlessly bridging gaps between on-premises high-performance and high-throughput computing systems, cloud platforms, and hybrid infrastructures. Its flexibility and extensibility have already made it a critical asset for applications ranging from federated data analysis to load balancing across multi-cloud systems. Adopted by numerous service providers and integrated into several workflow engines, TES empowers researchers to execute complex computational tasks through a single, abstracted interface. This eliminates compatibility hurdles, accelerates research timelines, reduces costs and enables compute-to-data solutions, essential for tackling the challenges of distributed data analysis.

Read the paper

NEW RESEARCH

Increasing use of code agents for AI-assisted coding and software development has brought safety and security concerns, such as generating or executing malicious code, which have become significant barriers to real-world deployment of these agents. In a recent paper, RedCode: Risky Code Execution and Generation Benchmark for Code Agents, published at NeurIPS 2024, researchers from Microsoft and external colleagues propose comprehensive and practical evaluations of the safety of code agents.
RedCode is an evaluation platform with benchmarks grounded in four key principles: real interaction with systems, holistic evaluation of unsafe code generation and execution, diverse input formats, and high-quality safety scenarios and tests. This research evaluated three agents based on various large language models (LLMs), providing insights into code agents' vulnerabilities. For instance, results showed that agents are more likely to reject executing unsafe operations on the operating system. Unsafe operations described in natural text lead to a lower rejection rate than those in code format. Additional evaluations revealed that more capable base models and agents with stronger overall coding abilities, such as GPT-4, tend to produce more sophisticated harmful software. These findings highlight the need for stringent safety evaluations for diverse code agents. The underlying dataset and related code are publicly available at https://github.com/AI-secure/RedCode (opens in new tab).

Read the paper

NEW RESEARCH

Although large language models (LLMs) excel at language-focused tasks like news writing, document summarization, customer service, and supporting virtual assistants, they can face challenges when it comes to learning and inference on numeric and structured industry data, such as tabular and time series data. To address these issues, researchers from Microsoft propose a new approach to building industrial foundation models (IFMs). As outlined in a recent blog post, they have successfully demonstrated the feasibility of cross-domain universal in-context learning on tabular data and the significant potential it could achieve. The researchers designed Generative Tabular Learning (opens in new tab) (GTL), a new framework that integrates multi-industry zero-shot and few-shot learning capabilities into LLMs. This approach allows the models to adapt and generalize to new fields, new data, and new tasks more effectively, flexibly responding to diverse data science tasks.
This technical paradigm has been open-sourced (opens in new tab) to promote broader use.

Read the paper

Microsoft Research in the news

Microsoft's smaller AI model beats the big guys: Meet Phi-4, the efficiency king
December 12, 2024
Microsoft launched a new artificial intelligence model today that achieves remarkable mathematical reasoning capabilities while using far fewer computational resources than its larger competitors.

Microsoft researcher Ece Kamar discusses the future of AI agents in 2025
Tech Brew | December 12, 2024
With AI agents widely expected to take off in 2025, the director of Microsoft's AI Frontiers lab weighs in on the future of this technology, the safeguards needed, and the year ahead in AI research.

A new frontier awaits computing with light
December 12, 2024
In the guts of a new type of computer, a bunch of tiny LEDs emit a green glow. Those lights have a job to do. They're performing calculations. Right now, this math is telling the computer how to identify handwritten images of numbers. The computer is part of a research program at Microsoft.

View more news and awards
  • Ideas: AI and democracy with Madeleine Daepp and Robert Osazuwa Ness
    www.microsoft.com
Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

MADELEINE DAEPP: Last summer, I was working on all of these, like, pro-democracy applications, trying to build out, like, a social data collection tool with AI, all this kind of stuff. And I went to the elections workshop that the Democracy Forward team at Microsoft had put on, and Dave Leichtman, who, you know, was the MC of that work, was really talking about how big of a global elections year 2024 was going to be. Over 70 countries around the world. And, you know, we're coming from Microsoft Research, where we were so excited about this technology. And then, all of a sudden, I was at the elections workshop, and I thought, oh no, [LAUGHS] like, this is not good timing.

ROBERT OSAZUWA NESS: What are we really talking about in the context of deepfakes in the political context, elections context? It's deception, right. I'm trying to use this technology to, say, create some kind of false record of events in order to convince people that something happened that actually did not happen. And so that goal of deceiving, of creating a false record, that's kind of how I have been thinking about deepfakes in contrast to the broader category of generative AI.

[TEASER ENDS]

GINNY BADANES: Welcome to Ideas, a Microsoft Research Podcast that dives deep into the world of technology research and the profound questions behind the code. In this series, we'll explore the technologies that are shaping our future and the big ideas that propel them forward.

[MUSIC FADES]

I'm your guest host, Ginny Badanes, and I lead Microsoft's Democracy Forward program, where we've spent the past year deeply engaged in supporting democratic elections around the world, including the recent US elections. We have been working on everything from raising awareness of nation-state propaganda efforts to helping campaigns and election officials prepare for deepfakes to protecting political campaigns from cyberattacks.
Today, I'm joined by two researchers who have also been diving deep into the impact of generative AI on democracy. Microsoft senior researchers Madeleine Daepp and Robert Osazuwa Ness are studying generative AI's influence in the political sphere with the goal of making AI systems more robust against misuse while supporting the development of AI tools that can strengthen democratic processes and systems. They spent time in Taiwan and India earlier this year, where both had big democratic elections. Madeleine and Robert, welcome to the podcast!

MADELEINE DAEPP: Thanks for having us.

ROBERT OSAZUWA NESS: Thanks for having us.

BADANES: So I have so many questions for you all, from how you conducted your research to what you've learned, and I'm really interested in what you think comes next. But first, let's talk about how you got involved in this in the first place. Could you both start by telling me a little bit about your backgrounds and just what got you into AI research in the first place?

DAEPP: Sure. So I'm a senior researcher here at Microsoft Research on the Special Projects team. But I did my PhD at MIT in urban studies and planning. And I think a lot of folks hear that field and think, oh, you know, housing, like upzoning housing and figuring out transportation systems. But it really is a field that's about little-d democracy, right. About how people make choices about shared public spaces every single day. You know, I joined Microsoft first off to run this, sort of, technology deployment in the city of Chicago, running a low-cost air-quality-sensor network for the city. And when GPT-4 came out, you know, first ChatGPT, and then we, sort of, had this big recognition of, sort of, how well this technology could do in summarizing and in representing opinions and in making sense of big unstructured datasets, right. I got actually very excited. Like, I thought this could be used for town planning processes.
[LAUGHS] Like, I thought we could, I had a whole project with a wonderful intern, Eva Maxfield Brown, looking at, can we summarize planning documents using AI? Can we build out policies from conversations that people have in shared public spaces? And so that was very much the impetus for thinking about how to apply and build things with this amazing new technology in these spaces.

BADANES: Robert, I think your background is a little bit different, yet you guys ended up in a similar place. So how did you get there?

NESS: Yeah, so I'm also on Special Projects, Microsoft Research. My work is focusing on large language models, LLMs. And, you know, so I focus on making these models more reliable and controllable in real-world applications. And my PhD is in statistics. And so I focus a lot on using just basic bread-and-butter statistical methods to try and control and understand LLM behavior. So currently, for example, I'm leading a team of engineers and running experiments designed to find ways to enhance a graphical approach to combining information retrieval and large language models. I work on statistical tests for testing significance of adversarial attacks on these models.

BADANES: Wow.

NESS: So, for example, if you find a way to trick one of these models into doing something it's not supposed to do, I make sure that it's not, like, a random fluke; that it's something that's reproducible. And I also work at this intersection between generative AI and, you know, Bayesian stuff, causal inference stuff. And so I came at looking at this democracy work through an alignment lens. So alignment is this task in AI of making sure these models align with human values and goals. And what I was seeing was a lot of research in the alignment space was viewing it as a technical problem. And, you know, as a statistician, we're trained to consult, right. Like, to go to the actual stakeholders and say, hey, what are your goals? What are your values?
And so this democracy work was an opportunity to do that in Microsoft Research, and connected with Madeleine. So she was planning to go to Taiwan, and, kind of from a past life, I wanted to become a trade economist and learned Mandarin. And so I speak fluent Mandarin, and it seemed like a good matchup of our skill sets ...

BADANES: Yeah.

NESS: ... and interests. And so that's, kind of, how we got started.

BADANES: So, Madeleine, you brought the two of you together, but what started it for you? This podcast is all about big ideas. What sparked the big idea to bring this work that you've been doing on generative AI into the space of democracy and then to go out and find Robert and match up together?

DAEPP: Yeah, well, Ginny, it was you. [LAUGHS] It was actually your team.

BADANES: I didn't plant that! [LAUGHS]

DAEPP: So, you know, I think last summer, I was working on all of these, like, pro-democracy applications, trying to build out, like, a social data collection tool with AI, all this kind of stuff. And I went to the elections workshop that the Democracy Forward team at Microsoft had put on, and Dave Leichtman, who, you know, was the MC of that work, was really talking about how big of a global elections year 2024 was going to be, that this, he was calling it "Votorama." You know, that term didn't take off. [LAUGHTER] The term that has taken off is "biggest election year in history," right. Over 70 countries around the world. And, you know, we're coming from Microsoft Research, where we were so excited about this technology. Like, when it started to pass theory of mind tests, right, which is like the ability to think about how other people are thinking, like, we were all like, oh, this is amazing; this opens up so many cool application spaces, right. When it was, like, passing benchmarks for multilingual communication, again, like, we were so excited about the prospect of building out multilingual systems.
And then, all of a sudden, I was at the elections workshop, and I thought, oh no, [LAUGHS] this is not good timing.

BADANES: Yeah ...

DAEPP: And because so much of my work focuses on, you know, building out computer science systems, like, um, data science systems or AI systems, but with communities in the loop, I really wanted to go to the folks most affected by this problem. And so I proposed a project to go to Taiwan and to study one of the, it was the second election of 2024. And Taiwan is known to be subject to more external disinformation than any other place in the world. So if you were going to see something anywhere, you would see it there. Also, it has an amazing civil society response, so really interesting people to talk to. But I do not speak Chinese, right. Like, I don't have the context; I don't speak the language. And so part of my process is to hire a half-local team. We had an amazing interpreter, Vickie Wang, and then a wonderful graduate student, Ti-Chung Cheng, who supported this work. But then also my team, Special Projects, happened to have this person who, like, not only is a leading AI researcher publishing in NeurIPS, like, building out these systems, but who also spoke Chinese, had worked in technology security, and had a real understanding of international studies and economics as well as AI. And so for me, like, finding Robert as a collaborator was kind of a unicorn moment.

BADANES: So it sounds like it was a match made in heaven of skill sets and abilities. Before we get into what you all found there, which I do want to get into, I first think it's helpful, I don't know, when we're dealing with these, like, complicated issues, particularly things that are moving and changing really quickly, sometimes I've found it's helpful to agree on definitions and sort of say, this is what we mean when we say this word. And that helps lead to understanding.
So while I know that this research is about more than deepfakes, and we'll talk about some of the things that are more than deepfakes, I am curious how you all define that term and how you think of it. Because this is something that I think is constantly moving and changing. So how have you all been thinking about the definition of that term?

NESS: So I've been thinking about it in terms of the intention behind it, right. We say "deepfake," and I think colloquially that means kind of all of generative AI. That's a bit unfortunate because there are things that are, you know, you can use generative AI to generate cartoons ...

BADANES: Right.

NESS: ... or illustrations for a children's book. And so in thinking about what we are really talking about in the context of deepfakes in the political context, elections context, it's deception, right. I'm trying to use this technology to, say, create some kind of false record of events, say, for example, something that a politician says, in order to convince people that something happened that actually did not happen.

BADANES: Right.

NESS: And so that goal of deceiving, of creating a false record, that's kind of how I have been thinking about deepfakes in contrast to the broader category of generative AI, deepfakes in terms of being a malicious use case. There are other malicious use cases that don't necessarily have to be deceptive, as well, as well as positive use cases.

BADANES: Well, that really, I mean, that resonates with me because what we found was when you use the term deception, or another term we hear a lot that I think works is fraud, that resonates with other people, too. Like, that helps them distinguish between neutral uses or even positive uses of AI in this space and the malicious use cases, though to your point, I suppose there's probably even deeper definitions of what malicious use could look like. Are you finding that distinction showing up in your work between fraud and deception in these use cases?
Is that something that has been coming through?

DAEPP: You know, we didn't really think about the term "fraud" until we started prepping for this interview with you. But it was clear in our interviews that using AI to deceive somebody financially was not OK, right. That's fraud. Using AI for the purposes of nudifying, like, removing somebody's clothes and then sextorting them, right, extorting them for money out of fear that this would be shared, like, that was not OK. And those are such clear lines. And it was clear that there's a set of uses of generative AI also in the political space, you know, of saying this person said something that they didn't, ...

BADANES: Mm-hmm.

DAEPP: ... of voter suppression, that in general, there's a very clear line that when it gets into that fraudulent place, when it gets into that simultaneously deceptive and malicious space, that's very clearly a no-go zone.

NESS: Oftentimes during this research, I found myself thinking about this dichotomy in cybersecurity of state actors, or broadly speaking, kind of, political actors, versus criminals.

BADANES: Right.

NESS: And it's important to understand the distinction because criminals are typically trying to target targets of opportunity and make money, while state-sponsored agents are willing to spend a lot more money and have very specific targets and a very specific definition of success. And so, like, this fraud-versus-deception distinction kind of feels like that a little bit in the sense that fraud is typically associated with criminal behavior, while, say, I might put out deceptive political messaging, but it might fall within the bounds of free speech within my country.

BADANES: Right, yeah.

NESS: And so this is not to say I disagree with that, but just, actually, that it could be a useful contrast in terms of thinking about the criminal versus the political uses, both legitimate and illegitimate.

BADANES: Well, I also think those of us who work in the AI space are dealing in very complicated issues that the majority of the world is still trying to understand.
And so any time you can find a word that people understand immediately in order to do the, sort of, storytelling: the reason that we are worried about deepfakes in elections is because we do not want voters to be defrauded. And that, we find, really breaks through because people understand that term already. That's a thing that they already know that they don't want to be; they do not want to be defrauded in their personal life or in how they vote. And so that really, I found, breaks through. But as much as I have talked about deepfakes, I know that you, and I know there's a lot of interest in talking about deepfakes when we talk about this subject, but I know your research goes beyond that. So what other forms of generative AI did you include in your research, or did you encounter, in the effort that you were doing both in Taiwan and India?

DAEPP: Yeah. So let me give you just, kind of, a big overview of, like, our taxonomy. Because as you said, like, so much of this is just about finding a word, right. Like, so much of it is about building a shared vocabulary so that we can start to have these conversations. And so when we looked at the political space, right, elections, so much of what it means to win an election is kind of two things. It's building an image of a candidate, right, or changing the image of your opposition, and telling a story, right.

BADANES: Mm-hmm.

DAEPP: And so if you think about image creation, of course, there are deepfakes. Like, of course, there are malicious representations of a person. But we also saw a lot of what we're calling "auth fakes," like authorized fakes, right. Candidates who would actually go to a consultancy and, like, get their bodies scanned so that videos could be made of them. They'd get their voices, a bunch of snippets of their voices, recorded so that then there could be personalized phone calls, right. So these are authorized uses of their image and likeness. Then we saw, a term I've heard in, sort of, the ether is "soft fakes."
So again, likenesses of a candidate, this time not necessarily authorized but promotional. There were people on Twitter, I guess, X, on Instagram, and they were sharing images of the candidate that they supported that were really flattering or silly or, you know, just really sort of in support of that person. So not with malicious intent, right, with promotional intent. And then the last one, and this, I think, was Robert's term, but in this image creation category, you know, one thing we talked about was just the way that people were also making fun of candidates. And in this case, this is a bit malicious, right. Like, they're making fun of people; they're satirizing them. But it's not deceptive because, ...

BADANES: Right ...

DAEPP: ... you know, often it has that hyper-saturated meme aesthetic. It's very clearly AI, or just, you know, per, like, sort of, US standards for satire, like, a reasonable person would know that it was silly. And so Robert said, you know, oh, these influencers, they're not trying to deceive people; like, they're not trying to lie about candidates. They're trying to roast them. [LAUGHTER] And so we called it a "deep roast." So that's, kind of, the images of candidates. I will say we also looked at narrative building, and there, one really important set of things that we saw was what we call "text to b-roll." So, you know, a lot of folks think that you can't really make AI videos because, like, Sora isn't out yet [1]. But in fact, what there is a lot of is tooling to, sort of, use AI to pull from stock imagery and b-roll footage and put together a 90-second video. You know, it doesn't look like AI; it's a real video. So text to b-roll. AI pasta? So if you know the threat intelligence space, there's this thing called "copy pasta," where people just ...

BADANES: Sure.

DAEPP: ... it's just a fun word for copy-paste. People just copy-paste terms in order to get a hashtag trending. And we talked to an ex-influencer who said, you know, we're using AI to do this. And I asked him why.
And he said, well, you know, if you just do copy-paste, the fact-checkers catch it. But if you use AI, they don't. And so, AI pasta. And there's also some research showing that this is potentially more persuasive than copy-paste ...

BADANES: Interesting.

DAEPP: ... because people think there's a social consensus. And then the last one, this is my last of the big taxonomy, and, Robert, of course, jump in on anything you want to go deeper on, but "Fake News 2.0." You know, I'm sure you've seen this, as well. Just this, like, creation of news websites, like entire new newspapers that nobody's ever heard of. AI avatars that are newscasters. And this is something that was happening before. Like, there's a long tradition of pretending to be a real news pamphlet or pretending to be a real outlet. But there's some interesting work out of Clemson; Patrick Warren has looked at some of these and shown that the quality and quantity of articles on these things has gotten a lot better and, you know, improves as a step function of, sort of, when new models come out.

NESS: And then on the flip side, you have people using the same technologies but stating clearly that it's AI generated, right. So we mentioned the AI avatars. In India, there's, there's Bhoomi, which is an AI news anchor for agricultural news, and it states there in clear terms that she's not real. But of course, somebody who wanted to be deceptive could use the same technology to portray something that looks like a real news broadcast that isn't. You know, and, kind of, going back, Madeleine mentioned deep roasts, right, so, kind of, using this technology to create satirical depictions of, say, a political opponent. Somebody, a colleague, sent something across my desk. It was a Douyin account, so Douyin is the version of TikTok that's used inside China, ...

BADANES: OK.

NESS: ... same company, but it's the internal version of TikTok, that was posting AI-generated videos of politicians in Taiwan.
And these were excellent, real good-quality AI-generated deepfakes of these politicians. But some of them were, first off, on the bottom of all of them, it said, this is AI-generated content.

BADANES: Oh.

NESS: And some of them were, kind of, obviously meant to be funny and were clearly fake, like still images that were animated to make somebody sing a funny song, for example. A very serious politician singing a very silly song. And it's a still image. It's not even, it's not even ...

BADANES: ... a video.

NESS: ... like, video.

BADANES: Right, right.

NESS: And so I messaged Puma Shen, who is one of the legislators in Taiwan who was targeted by these attacks, and I said, what do you think about this? And, you know, he said, yeah, they got me. [LAUGHTER] And I said, you know, do you think people believe this? I mean, there are people who are trying to debunk it. And he said, no, our supporters don't believe it, but, you know, people who support the other side or people who are apolitical, they might believe it, or even if it says it's fake, they know it's fake, but they might still say that, yeah, but this is something they would do, right. This is ...

BADANES: Yeah, it fits the narrative. Yeah.

NESS: ... it fits the narrative, right. And that, kind of, that really, you know, I had thought of this myself, but just hearing somebody, you know, who's, you know, a politician who's targeted by these attacks just saying that, it's, like, even if they believe it's, even if they know it's fake, they still believe it because it's something that they would do.

BADANES: Sure.

NESS: That's, you know, as a form of propaganda, even relative to the canonical idea of deepfake that we have, this could be more effective, right. Like, just say it's AI and then use it to, kind of, paint the picture of the opponent in any way you like.

BADANES: Sure, and this gets into that, sort of, challenging space I think we find ourselves in right now, which is people don't know necessarily how to tell what's real or not.
And in the case you're describing, it has labeling, so that should tell you. But a lot of the content we come across online does not have labeling. And you cannot tell just based on your eyes whether images were generated by AI or whether they're real. One of the things that I get asked a lot is, why can't we just build good AI to detect bad AI, right? Why don't we have a solution where I just take a picture and I throw it into a machine and it tells me thumbs-up or thumbs-down if this is AI generated or not? And the question around detection is a really tricky one. I'm curious what you all think about, sort of, the question of, can detection solve this problem or not?

NESS: So I'll mention one thing. So Madeleine mentioned an application of this technology called text to b-roll. And so what this is, technically speaking, what this is doing is you're taking real footage, you stick it in a database, it's quote, unquote vectorized into these representations that the AI can understand, and then you say, hey, generate a video that illustrates this narrative for me. And you provide it the text narrative, and then it goes and pulls out a whole bunch of real video from a database and curates it into a short video that you could put on TikTok, for example. So this was a fully AI-generated product, but none of the actual content is synthetic.

BADANES: Ah, right.

NESS: So in that case, your quote, unquote AI detection tool is not going to work.

DAEPP: Yeah, I mean, something that I find really fascinating any time that you're dealing with a sociotechnical system, right, a technical system embedded in social context, is folks, you know, think that things are easy that are hard and things are hard that are easy, right.
And so with a lot of the detections work, right, like, if you put a deepfake detector out, you make that available to anyone, then what they can do is they can run a bunch of stuff by it, ...

BADANES: Yeah.

DAEPP: ... add a little bit of random noise, and then the deepfake detector doesn't work anymore. And so that detection actually, technically, becomes an arms race, you know. And we're seeing now some detectors that, like, you know, work when you're not looking at a specific image or a specific piece of text but you're looking at a lot all at once. That seems more promising. But, just, this is a very, very technically difficult problem, and that puts us as researchers in a really tricky place because, you know, you're talking to folks who say, why can't you just solve this? If you put this out, then you have to put the detector out. And we're like, that's actually not, that's not a technically feasible long-term solution in this space. And the solutions are going to be social and regulatory and, you know, changes in norms, as well as technical solutions that maybe are about everything outside of AI, right.

BADANES: Yeah.

DAEPP: Not about fixing the AI system but fixing the context within which it's used.

BADANES: It's not just a technological solution. There's more to it. Robert?

NESS: So if somebody were to push back there, they could say, well, great; in the long term, maybe it's an arms race, but in the short term, right, we can have solutions out there that, you know, at least in the next election cycle, we could maybe prevent some of these things from happening. And, again, kind of harkening back to cybersecurity, maybe if you make it hard enough, only the really dedicated, really well-funded people are going to be doing it rather than, you know, everybody who wants to throw a bunch of deepfakes on the internet. But the problem still there is that it focuses really on video and images, right.

BADANES: Yeah. What about audio?

NESS: What about audio? And what about text? So ...

BADANES: Yeah.
Those are hard. I feel like we've talked a lot about definitions and theory, but I want to make sure we talk more about what you guys saw and researched and understood on the ground, in particular, your trips to India and Taiwan, and even, if you want, to reflect on how those compare to the US environment. What did you actually uncover? What surprised you? What was different between those countries?

DAEPP: Yeah, I mean, right, so Taiwan, both of these places are young democracies. And that's really interesting, right. So, like, in Taiwan, for example, when people vote, they vote on paper. And anybody can go watch. That's part of their, like, security strategy. Like, anyone around the world can just come and watch. People come from far. They fly in from Canada and Japan and elsewhere just to watch Taiwanese people vote. And then similarly in India, there's this rule where you have to be walking distance from your polling place, and so the election takes two months. And, like, your polling places move from place to place, and sometimes, it arrives on an elephant. And so these were really interesting places to, like, I as an American, just, like, found it very, very fascinating and important to be outside of the American context. You know, we just take for granted that how we do democracy is how other people do it. But Taiwan was very much a joint, like, civil society and government everyday response to this challenge of having a lot of efforts to manipulate public opinion happening with, you know, real-world speeches, with AI, with anything that you can imagine. You know, and I think the Microsoft Threat Analysis Center released a report documenting some of the, sort of, video stuff [2]. There was a use of AI to create videos the night before the election, things like this. But then India is really thinking of, so India, right, it's the world's biggest democracy, right.
Like, nearly a billion people were eligible to vote.

BADANES: Yeah.

NESS: And arguably the most diverse, right?

DAEPP: Yeah, arguably the most diverse in terms of languages, contexts. And it's also positioning itself as the AI laboratory for the Global South. And so folks, including folks at the MSR (Microsoft Research) Bangalore lab, are leaders in thinking about representing low-resource languages, right, thinking about cultural representation in AI models. And so there you have all of these technologists who are really trying to innovate and really trying to think about what's the next clever application, what's the next clever use. And so that, sort of, that taxonomy that we talked about, like, I think just every week, every interview, we, sort of, had new things to add because folks there were just constantly trying all different kinds of ways of engaging with the public.

NESS: Yeah, I think for me, in India in particular, you know, India is an engineering culture, right. In terms of, like, the professional culture there, they're very, kind of, engineering skewed. And so I think one of the bigger surprises for me was seeing people who were very experienced and effective campaign operatives, right, people who would go and, you know, hit the pavement; do door knocking; kind of, segment neighborhoods by demographics and voter bloc, these people had also, you know, graduated in engineering from an IIT (Indian Institute of Technology), ...

BADANES: Sure.

NESS: ... right, and so [LAUGHS] they were happy to pick up these tools and leverage them to support their expertise in this work. And so some of the, you know, I think a lot of the narrative that we tell ourselves in AI is how it's going to be, kind of, replacing people in doing their work.
But what I saw in India was that people who were very effective had a lot of domain expertise that you couldn't really automate away, and they were the ones who were the early adopters of these tools and were applying them in ways that I think we're behind on in terms of, you know, ideas in the US.

BADANES: Yeah, I mean, there's, sort of, this sentiment that AI only augments existing problems and can enhance existing solutions, right. So we're not great at translation tools, but AI will make us much better at that. But that also can then be weaponized and used as a tool to deceive people, and propaganda is not new, right? We're only scaling or making existing problems harder, or adversaries are trying to weaponize AI to build on things they've already been doing, whether that's cyberattacks or influence operations. And while the three of us are in different roles, we do work for the same company. And it's a large technology company that is helping bring AI to the world. At the same time, I think there are some responsibilities when we look at, you know, bad actors who are looking to manipulate our products to create and spread this kind of deceptive media, whether it's in elections or in other cases like financial fraud or other ways that we see this being leveraged. I'm curious what you all heard from others when you've been doing your research and also what you think our responsibilities are as a big tech company when it comes to keeping actors from using our products in those ways.

DAEPP: You know, when I started using GPT-4, one of the things I did was I called my parents, and I said, if you hear me on a phone call, ...

BADANES: Yeah.

DAEPP: ... like, please double-check. Ask me things that only I would know. And when I walk around Building 99, which is, kind of, a storied building in which a lot of Microsoft researchers work, everybody did that call. We all called our parents.

BADANES: Interesting.

DAEPP: Or, you know, we all checked in.
So just as, like, we have a responsibility to the folks that we care about, I think as a company, that same, sort of, like, raising literacy around the types of fraud to expect and how to protect yourself from them, I think that gets back to that fraud space that we talked about, and, you know, supporting law enforcement, sharing what needs to be shared, I think that without question is a space that we need to work in. I will say a lot of the folks we talked with, they were using Llama on a local GPU, right.

BADANES: OK.

DAEPP: They were using open-source models. And sometimes they were testing out Phi. They would use Phi, Grok, Llama, anything like that. And so that raises an interesting question about our guardrails and our safety practices. And I think there, our obligation, and our opportunity actually, is to set the standard, right. To say, OK, like, you know, if you use local Llama and it spouts a bunch of stuff about voter suppression, like, you can get in trouble for that. And so what does it mean to have a safe AI that wins in the marketplace, right? That's an AI that people can feel confident and comfortable about using, and one that's societally safe but also personally safe. And I think that's both a challenge and a real opportunity for us.

BADANES: Yeah, oh, go ahead, Robert, yeah ...

NESS: Going back to the point about fraud. It was this year, in January, when that British engineering firm Arup, when somebody used a deepfake to defraud that company of about $25 million, ...

BADANES: Yeah.

NESS: ... their Hong Kong office. And after that happened, some business managers in Microsoft reached out to me regarding a major client who wanted to start red teaming. And by red teaming, I mean intentionally targeting your executives and employees with these types of attacks in order to figure out where your vulnerabilities as an organization are. And I think, yeah, it got me thinking, like, wow, I would, you know, can we do this for my dad?
[LAUGHS] Because I think that was actually a theme that came out from a lot of this work, which was, like, how can we empower the people who are really on the frontlines of defending democracy in some of these places in terms of the tooling there? So we talked about, say, AI detection tools, but the people who are actually doing fact-checking, they're looking at more than just the video or the images; they're actually taking a holistic view of the news story and doing some proper investigative journalism to see if something is fake or not.

BADANES: Yeah.

NESS: And so I think, as a company that creates products, can we take more of a product mindset to building tools that support that entire workflow in terms of fact-checking or investigative journalism in the context of democratic outcomes, ...

BADANES: Yeah.

NESS: ... where maybe looking at individual deepfake content is just a piece of that.

BADANES: Yeah, you know, I think there's a lot of parallels here to cybersecurity. That's also what we've found, this idea that, first of all, there's no silver bullet, as we were talking about earlier with the detection piece. Like, you can't expect your system to be secure just because you have a firewall, right. You have to have this, like, defense-in-depth approach where you have lots of different layers. And one of those layers has been on the literacy side, right. Training and teaching people not to click on a phishing link, understanding that they should scroll over the URL. Like, these are efforts that have been taken up, sort of, in a broad societal sense. Employers do it. Big tech companies do it. Governments do it through PSAs and other things. So there's been a concerted effort to get a population who might not have been aware of the fact that they were about to be scammed to now know not to click on that link. I think, you know, you raised the point about literacy. And I think there's something to be said about media literacy in this space.
It's both AI literacy, understanding what it is, but also understanding that people may try to defraud you. And whether that is in the political sense or in the financial sense, once you have that, sort of, skill set in place, you're going to be protected. One thing that I've heard, though, as I have conversations about this challenge, I've heard a couple things back from people, specifically in civil society. One is not to put the impetus too much on the end consumer, which I think, I'm hearing that we also recognize there's things that we as technology companies should be focusing on. But the other thing is the concern that in, sort of, the long run, we're going to all lose trust in everything we see anyway. And I've heard some people refer to that as the trust deficit. Have you all seen anything promising in the space to give you a sense around, can we ever trust what we're looking at again, or are we actually just training everyone to not believe anything they see? Which I hope is not the case. I am an optimist. But I'd love to hear what you all came across. Are there signs of hope here where we might actually have a place where we can trust what we see again?

DAEPP: Yeah. So two things. There is this phenomenon called the liar's dividend, right, ...

BADANES: Sure, yeah.

DAEPP: ... which is that if you educate folks about how AI can be used to create fake clips, fake audio clips, fake videos, then if somebody has a real audio clip, a real video, they can claim that it's AI. And I think we talk, you know, again, this is, like, in a US-centric space, we talk about this with politicians, but the space in which this is really concerning, I think, is war crimes, right, ...

BADANES: Oh, yeah.

DAEPP: ... these real human rights infractions where you can prevent evidence from getting out or being taken seriously. And we do see that right after invasions, for example, these days.
But this is actually a space ... like, I just told you, oh, detection is so hard, and not technically, like, that'll be an arms race! But actually, there is this wonderful project, Project Providence, that is a Microsoft collaboration with a company called Truepic. It's, like, an app, right. And what happens is, when you take a photo using this app, it hashes the GPS coordinates where the photo was taken, the time, the day, and uploads that with the pixels, with the image, to Azure. And then later, when a journalist goes to use that image, they can see that the pixels are exactly the same, and then they can check the location and they can confirm the GPS. And this actually meets evidentiary standards for the UN human rights tribunal, right.

BADANES: Right.

DAEPP: So this is being used in Ukraine to document war crimes. And so, you know, what if everybody had that app on their phone? Most photos you take, you can use an AI tool and immediately play with. But in that particular situation where you need to confirm provenance and you need to confirm that this was a real event that happened, that is a technology that exists, and I think folks like the C2PA coalition (Coalition for Content Provenance and Authenticity) can make that happen across hardware providers.

NESS: And I think the challenge for me is, we can't separate this problem from some of the other, kind of, fundamental problems that we have in our media environment now, right. So, for example, if I go on to my favorite social media app and I see videos from some conflicts around the world, these videos could be not AI generated and I still could be, you know, the target of some PR campaign to promote certain content and suppress other ones. The videos could be authentic videos but not actually be accurate depictions of what they claim to be. And so I think that AI presents a complicating factor in an already difficult problem space.
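The capture-then-verify flow described here can be sketched in a few lines. This is an illustrative sketch only, not Truepic's or Project Providence's actual implementation: the manifest fields, the device key, and the use of an HMAC signature are all assumptions standing in for the real C2PA-style signed manifest.

    import hashlib
    import hmac
    import json

    DEVICE_KEY = b"device-secret-key"  # hypothetical per-device signing key

    def capture_manifest(image_bytes: bytes, lat: float, lon: float, timestamp: str) -> dict:
        # At capture time: hash the pixels and bind them to where and
        # when the photo was taken, then sign the whole claim.
        pixel_hash = hashlib.sha256(image_bytes).hexdigest()
        claim = {"pixel_hash": pixel_hash, "lat": lat, "lon": lon, "time": timestamp}
        payload = json.dumps(claim, sort_keys=True).encode()
        claim["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return claim

    def verify_manifest(image_bytes: bytes, claim: dict) -> bool:
        # Later, a journalist checks that the pixels are exactly the same
        # and that the signed capture metadata has not been altered.
        if hashlib.sha256(image_bytes).hexdigest() != claim["pixel_hash"]:
            return False  # pixels were edited after capture
        unsigned = {k: v for k, v in claim.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, claim["signature"])

    photo = b"...raw image bytes..."
    manifest = capture_manifest(photo, 50.45, 30.52, "2024-08-04T23:18:49Z")
    assert verify_manifest(photo, manifest)
    assert not verify_manifest(photo + b"edit", manifest)

The point of the design is the one Daepp makes: verification does not judge whether AI touched the image; it only answers whether these exact pixels came from this place and time.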
And I think, you know, trying to isolate these different variables and targeting them individually is pretty tricky. I do think that, despite the liar's dividend, media literacy is a very positive area to, kind of, focus energy ...

BADANES: Yeah.

NESS: ... in the sense that, you know, you mentioned earlier, like, using this term fraud. Again, going back to this analogy with cybersecurity and cybercrime, it tends to resonate with people. We saw that, as well, especially in Taiwan, didn't we, Madeleine? Well, in India, too, with the sextortion fears. But in Taiwan, a lot of just cybercrime in terms of defrauding people of money. And one of the things that we had observed there was that talking about generative AI in the context of elections was difficult because people, kind of, immediately went into their political camps, right.

BADANES: Yeah.

NESS: And so you had to, kind of, penetrate ... you know, people were trying to, kind of, suss out which side you were on when you're trying to educate them about this topic.

BADANES: Sure.

NESS: But if you talk about fraud, everybody's ... like, fraud itself is a lot less partisan.

BADANES: Yeah, it's a neutral term.

NESS: Exactly. And so it becomes a very useful way to, kind of, get these ideas out there.

BADANES: That's really interesting. And I love the provenance example because it really gets to the question about authenticity. Like, where did something come from? What is the origin of that media? Where has it traveled over time? And if AI is a component of it, then that's a noted fact. But it doesn't put us into the space of AI or not AI, which I think is where a lot of the, sort of, labeling has gone so far. And I understand the instinct to do that. But I like the idea of moving more towards how you know more about an image, of which whether there was AI involved or not is a component but does not carry judgment. It does not make the picture good or bad. It doesn't make it true or false.
It's just more information for you to consume. And then, of course, the media literacy piece: people need to know to look for those indicators and want them and ask for them from the technology company. So I think that's a good, that's a good silver lining. You gave me the light at the end of the tunnel I think I was looking for on the post-truth world. So, look, here's the big question. You guys have been spending this time focusing on AI and democracy in this big, massive global election year. There was a lot of hype. [LAUGHS] There was a lot of hype. Lots of articles written about how this was going to be the AI election apocalypse. What say you? Was it? Was it not?

NESS: I think it was ... well, we definitely have documented cases where this happened. And I'm wary of this question, particularly again from the cybersecurity standpoint, which is, if you were not the victim of a terrible hack that brought down your entire company, would you say, like, well, it didn't happen, so it's not going to happen, right. You would never ...

BADANES: Yeah.

NESS: That would be a silly attitude to have, right. And also, you don't know what you don't know, right. So, like, a lot of the ... you know, we mentioned sextortion; we mentioned these cybercrimes. A lot of these are small-dollar crimes, which means they don't get reported, or they don't get reported for reasons of shame. And so we don't even have numbers on a lot of that. And we know that the political techniques are going to mirror the criminal techniques.

BADANES: Yeah.

NESS: And also, I worry about, say, down-ballot elections. So much of, kind of, our election this year, a lot of the focus was on the national candidates, but, you know, if local poll workers are being targeted, if disinformation campaigns are being put out about local candidates, it's not going to get the kind of play in the national media such that you and I might hear about it.
And so I'm, you know ... so I'll hand it off to Madeleine, but yeah.

DAEPP: So absolutely agree with Robert's point, right. If your child was affected by sextortion, if you are a country that had an audio clip go viral, this was the deepfake deluge for you, right. That said, something that happened, you know, in India as in the United States: there were major prosecutions very early on, right.

BADANES: Yeah.

DAEPP: So in India, there was a video. It turned out not to be a deepfake. It turned out to be a cheap fake, to your point about, you know, the question isn't whether there's AI involved; the question is whether this is an attempt to defraud. And five people were charged for this video.

BADANES: Yeah.

DAEPP: And in the United States, right, those Biden robocalls using Biden's voice to tell folks not to vote, like, that led to a million-dollar fine, I think, for the telecoms and $6 million for the consultant who created that. And when we talked to people in India, you know, people who work in this space, they said, well, I'm not going to do that; like, I'm going to focus on other things. So internal actors pay attention to these things. That really changes what people do and how they do it. And so the work that your team did, right, to educate candidates about looking out for this stuff, the work that the MTAC (Microsoft Threat Analysis Center) did to track usage and report it ... all of those interventions, I think, worked. I think they were really important, and I do think that this absence of a deluge is actually a huge number of people making a very concerted effort to prevent it from happening.

BADANES: That's encouraging.

NESS: Madeleine, you made a really important point that this deterrence from prosecution is effective for internal actors, ...

BADANES: Yeah.

DAEPP: Yeah, that's right.

NESS: ... right.
So for foreign states who are trying to interfere with other people's elections, the fear of prosecution is not going to be as much of a deterrent.

BADANES: That is true. I will say, what we saw in this election cycle, in particular in the US, was a concerted effort by the intelligence community to call out and name nation-state actors who were either doing cyberattacks or influence operations, specific videos that they identified, whether there was AI involved or not. I think that level of communication with the public, while it maybe doesn't lead to those actors going to jail (maybe someday), does in fact lead to a more aware public and therefore, hopefully, a less effective campaign. It's a little bit into the literacy space, and it's something that we've seen government, again in this last cycle, do very effectively: to name and shame, essentially, when they see these things, in part to make sure voters are aware of what's happening. We're not quite through this big global election year; we have a couple more elections before we really hit the end of the year, but it's winding down. What is next for you all? Are you going to continue this work? Are you going to build on it? What comes next?

DAEPP: So our research in India actually wasn't focused specifically on elections. It was about AI and digital communications.

BADANES: Ahh.

DAEPP: Because, you know, again, India is this laboratory.

BADANES: Sure.

DAEPP: And I think what we learned from that work is that, you know, this is going to be a part of our digital communications and our information system going forward, without question. And the question is just, like, what are the viable business models, right? What are the applications that work?
And again, that comes back to making sure that whatever AI ... you know, when people build AI into their entire, you know, newsletter-writing system, when they build it into their content production, that they can feel confident that it's safe and that it meets their needs and that they're protected when they use it. And similarly, like, what are those applications that really work, and how do you empower those lead users while mitigating those harms and supporting civil society? I think that's an incredible ... like, that's, as a researcher, that's, you know, that's a career, right.

BADANES: Yeah.

DAEPP: That's a wonderful research space. And so I think understanding how to support AI that is safe, that enables people globally to have self-determination in how models represent them, and that is usable and powerful, I think that's broadly ...

BADANES: Where this goes.

DAEPP: ... what I want to drive.

BADANES: Robert, how about you?

NESS: You know, I mentioned earlier on these AI alignment issues.

BADANES: Yeah.

NESS: And I was really fascinated by how local and contextual those issues really are. So to give an example from Taiwan: we train these models on training data that we find from the internet. Well, when it comes to, say, Mandarin Chinese, you can imagine the proportion of content, just the quantity of content, on the internet that comes from China is a lot more than the quantity that comes from Taiwan. And of course, what's politically correct in China is different from what's politically correct in Taiwan. And so when we were talking to Taiwanese, a lot of people had these concerns about, you know, having these large language models that reflected Taiwanese values.
We heard the same thing in India, about just people on different sides of the political spectrum. A YouTuber in India walked us through how, for example, for a founding father of India, there was disparate literature, some in favor of this person and some more critical of this person, and he had spent time trying to suss out whether GPT-4 was on one side or the other.

BADANES: Oh. Whose side are you on? [LAUGHS]

NESS: Right. And so I think for our alignment research at Microsoft Research, this becomes the beginning of, kind of, a very fruitful way of engaging with local stakeholders and making sure that we can reflect these concerns in the models that we develop and deploy.

BADANES: Yeah. Well, first, I just want to thank you guys for all the work you've done. This is amazing. We've really enjoyed partnering with you. I've loved learning about the research and the efforts, and I'm excited to see what you do next. I always want to end these kinds of conversations on a more positive note, because we've talked a lot about the weaponization of AI and, you know, ethical areas that are confusing. But I am sure at some point in your work, you came across really positive use cases of AI when it comes to democracy, or at least I hope you have.
[LAUGHS] Do you have any examples, or can you leave us with something about where you see it either going or actively being used in a way to really strengthen democratic processes or systems?

DAEPP: Yeah, I mean, there was just a big paper in Science, right, which, as researchers, when something comes out in Science, you know your field is about to change, right, ...

BADANES: Yeah.

DAEPP: ... showing that an AI model, in, like, political deliberations, small groups of UK residents talking about difficult topics like Brexit, you know, the climate crisis, difficult topics, that in these conversations, an AI moderator created, like, consensus statements that represented the majority opinion, still showed the minority opinion, but that participants preferred to a human-written statement and in fact preferred to their original opinion.

BADANES: Wow.

DAEPP: And that this, you know, not only works in these randomized controlled trials but actually works in a real citizens' deliberation. And so that potential of, like, carefully fine-tuned, carefully aligned AI to actually help people find points of agreement, that's a really exciting space.

BADANES: So next time my kids are in a fight, I'm going to point them to Copilot and say, work with Copilot to mediate. [LAUGHS] No, that's really, really interesting. Robert, how about you?

NESS: She, kind of, stole my example. [LAUGHTER] But I'll take it from a different perspective. So, yes, how these technologies can enable people to collaborate, and ideally, I think, from a democratic standpoint, at a local level, right. So, I mean, so much of our politics is, kind of, focused at the national-level campaign, but we can collaborate much more easily with people who are in our local constituencies.
And I think to myself about, kind of, like, the decline particularly of local newspapers, local media.

BADANES: Right.

NESS: And so I wonder, you know, can these technologies help address that problem in terms of just, kind of, information about, say, your local community, as well as local politicians. And, yeah, and to Madeleine's point, Madeleine started the conversation talking about her background in urban planning and some of the work she did, you know, working on a local level with local officials to bring technology to the level of cities. And I think, like, well, you know, politics are local, right. So, you know, I think that's where there's a lot of opportunity for improvement.

BADANES: Well, Robert, you just queued up a topic for a whole other podcast, because our team also does a lot of work around journalism, and I will say we have seen that AI at the local level with local news is really a powerful tool that we're starting to see a lot of appetite and interest for, in order to overcome some of the hurdles they face right now in that industry when it comes to capacity, financing, you know, not being able to be in all of the places they want to be at once to make sure that they're reporting equally across the community. This is, like, a perfect use case for AI, and we're starting to see folks who are really using it. So maybe we'll come back and do this again another time on that topic. But I just want to thank you both, Madeleine and Robert, for joining us today and sharing your insights. This was really a fascinating conversation. I know I learned a lot. I hope that our listeners learned a lot, as well.

[MUSIC]

And, listeners, I hope that you tune in for more episodes of Ideas, where we continue to explore the technologies shaping our future and the big ideas behind them.
Thank you, guys, so much.

DAEPP: Thank you.

NESS: Thank you.

[MUSIC FADES]

[1] The video generation model Sora was released publicly earlier this month.

[2] For a summary of and link to the report, see the Microsoft On the Issues blog post "China tests US voter fault lines and ramps AI content to boost its geopolitical interests."
  • AIOpsLab: Building AI agents for autonomous clouds
    www.microsoft.com
In our increasingly complex digital landscape, enterprises and cloud providers face significant challenges in the development, deployment, and maintenance of sophisticated IT applications. The broad adoption of microservices and cloud-based serverless architecture has streamlined certain aspects of application development while simultaneously introducing a host of operational difficulties, particularly in fault diagnosis and mitigation. These complexities can result in outages, which have the potential to cause major business disruptions, underscoring the critical need for robust solutions that ensure high availability and reliability in cloud services. As the expectation for five-nines availability grows, organizations must navigate an intricate web of operational demands to maintain customer satisfaction and business continuity.

To tackle these challenges, recent research on using AIOps agents for cloud operations, such as AI agents for incident root cause analysis (RCA) or triaging, has relied on proprietary services and datasets. Other prior works use frameworks specific to the solutions that they are building, or ad hoc and static benchmarks and metrics that fail to capture the dynamic nature of real-world cloud services. Users developing agents for cloud operations tasks with Azure AI Agent Service can evaluate and improve them using AIOpsLab. Furthermore, current approaches do not agree on standard metrics or a standard taxonomy for operational tasks. This calls for a standardized and principled research framework for building, testing, comparing, and improving AIOps agents. The framework should allow agents to interact with realistic service operation tasks in a reproducible manner. It must be flexible in extending to new applications, workloads, and faults.
Importantly, it should go beyond just evaluating the AI agents and enable users to improve the agents themselves, for example, by providing sufficient observability and even serving as a training environment (gym) to generate samples to learn on.

We developed AIOpsLab, a holistic evaluation framework for researchers and developers, to enable the design, development, evaluation, and enhancement of AIOps agents, which also serves the purpose of reproducible, standardized, interoperable, and scalable benchmarks. AIOpsLab is open sourced on GitHub under the MIT license, so that researchers and engineers can leverage it to evaluate AIOps agents at scale. The AIOpsLab research paper has been accepted at SoCC'24 (the annual ACM Symposium on Cloud Computing).

Figure 1. System architecture of AIOpsLab.

Agent-cloud interface (ACI)

AIOpsLab strictly separates the agent and the application service using an intermediate orchestrator. It provides several interfaces for other system parts to integrate and extend. First, it establishes a session with an agent to share information about benchmark problems: (1) the problem description, (2) instructions (e.g., response format), and (3) available APIs to call as actions. The APIs are a set of documented tools, e.g., get logs, get metrics, and exec shell, designed to help the agent solve a task. There are no restrictions on the agent's implementation; the orchestrator poses problems and polls it for the next action to perform given the previous result. Each action must be a valid API call, which the orchestrator validates and carries out. The orchestrator has privileged access to the deployment and can take arbitrary actions (e.g., scale-up, redeploy) using appropriate tools (e.g., helm, kubectl) to resolve problems on behalf of the agent. Lastly, the orchestrator calls workload and fault generators to create service disruptions, which serve as live benchmark problems.
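The pose-poll-validate-execute cycle described above can be sketched as a simplified synchronous loop. This is an illustrative sketch of the idea, not AIOpsLab's actual orchestrator internals; the function names, the `ALLOWED_APIS` set, and the scripted agent are assumptions for demonstration.

    # Hypothetical sketch of the orchestrator's poll loop: pose the problem,
    # poll the agent for its next action, validate it against the documented
    # APIs, execute it, and feed the result back as the new state.
    ALLOWED_APIS = {"get_logs", "get_metrics", "exec_shell", "submit"}

    def run_episode(agent, problem, execute, max_steps=10):
        state = problem["description"]
        for _ in range(max_steps):
            action = agent.get_action(state)
            api = action.split("(", 1)[0].strip()
            if api not in ALLOWED_APIS:
                state = f"Invalid API call: {api}"  # feedback instead of crashing
                continue
            if api == "submit":
                return action  # agent proposes a resolution
            state = execute(action)  # orchestrator carries out the call
        return None

    # Toy agent that inspects logs once, then submits a mitigation.
    class ScriptedAgent:
        def __init__(self, script):
            self.script = iter(script)
        def get_action(self, state):
            return next(self.script)

    agent = ScriptedAgent(['get_logs("user-service", "test-ns")',
                           'submit("redeploy user-service")'])
    result = run_episode(agent, {"description": "user-service is down"},
                         execute=lambda a: "Connection refused on port 9090")
    assert result == 'submit("redeploy user-service")'

Validating each action before execution is what lets the orchestrator safely hold privileged access while the agent itself remains untrusted.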
AIOpsLab provides additional APIs to extend to new services and generators. The following example shows how to onboard an agent to AIOpsLab:

    import asyncio

    from aiopslab import Orchestrator

    class Agent:
        def __init__(self, prob, instructs, apis):
            self.prompt = self.set_prompt(prob, instructs, apis)
            self.llm = GPT4()

        async def get_action(self, state: str) -> str:
            return self.llm.generate(self.prompt + state)

    # initialize the orchestrator
    orch = Orchestrator()
    pid = "misconfig_app_hotel_res-mitigation-1"
    prob_desc, instructs, apis = orch.init_problem(pid)

    # register and evaluate the agent
    agent = Agent(prob_desc, instructs, apis)
    orch.register_agent(agent, name="myAgent")
    asyncio.run(orch.start_problem(max_steps=10))

Service

AIOpsLab abstracts a diverse set of services to reflect the variance in production environments. This includes live, running services that are implemented using various architectural principles, including microservices, serverless, and monolithic. We also leverage open-sourced application suites such as DeathStarBench, as they provide artifacts, like source code and commit history, along with run-time telemetry. Adding tools like BluePrint can help AIOpsLab scale to other academic and production services.

The workload generator in AIOpsLab plays a crucial role by creating simulations of both faulty and normal scenarios. It receives specifications from the orchestrator, such as the task, desired effects, scale, and duration. The generator can use a model trained on real production traces to generate workloads that align with these specifications. Faulty scenarios may simulate conditions like resource exhaustion, exploit edge cases, or trigger cascading failures, inspired by real incidents. Normal scenarios mimic typical production patterns, such as daily activity cycles and multi-user interactions. When various characteristics (e.g., service calls, user distribution, arrival times) can lead to the desired effect, multiple workloads can be stored in the problem cache for use by the orchestrator.
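The orchestrator-to-generator handoff described above might look like the following sketch. All field and function names here are hypothetical illustrations of the specification flow, not AIOpsLab's actual workload-generator API.

    from dataclasses import dataclass

    @dataclass
    class WorkloadSpec:
        task: str          # e.g., "detection", "mitigation"
        effect: str        # desired effect, e.g., "resource_exhaustion"
        scale: int         # requests per second
        duration_s: int    # how long the workload should run

    def generate_workloads(spec: WorkloadSpec) -> list:
        # Several workloads with different arrival patterns can produce
        # the same desired effect, so we emit one variant per pattern.
        variants = []
        for arrival in ("steady", "bursty", "diurnal"):
            variants.append({
                "task": spec.task,
                "effect": spec.effect,
                "rps": spec.scale,
                "duration_s": spec.duration_s,
                "arrival_pattern": arrival,
            })
        return variants

    # Store all matching variants in a problem cache for the orchestrator.
    cache = {}
    spec = WorkloadSpec(task="mitigation", effect="resource_exhaustion",
                        scale=100, duration_s=300)
    cache[(spec.task, spec.effect)] = generate_workloads(spec)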
In coordination with the fault generator, the workload generator can also create complex fault scenarios with workloads.

Fault generator

AIOpsLab has a novel push-button fault generator designed for generic applicability across various cloud scenarios. Our approach integrates application and domain knowledge to create adaptable policies and oracles compatible with AIOps scenarios. This includes fine-grained fault injection capable of simulating complex failures inspired by production incidents. Additionally, it can inject faults at various system levels, exposing root causes while maintaining semantic integrity and considering interdependencies between cloud microservices. The fault injector's versatility can enhance the reliability and robustness of cloud systems by enabling thorough testing and evaluation of AIOps capabilities.

Observability

AIOpsLab is equipped with an extensible observability layer designed to provide comprehensive monitoring capabilities across various system layers for any AIOps tool. AIOpsLab collects a wide array of telemetry data, including (1) traces from Jaeger detailing the end-to-end paths of requests through distributed systems, (2) application logs formatted and recorded by Filebeat and Logstash, and (3) system metrics monitored by Prometheus. Additionally, AIOpsLab also captures lower-level system information such as syscall logs and cluster information.
As mentioned, we handle potential data overload through flexible APIs that tune the telemetry data relevant to the AIOps tools.

AIOpsLab currently supports four key tasks within the AIOps domain: incident detection, localization, root cause diagnosis, and mitigation. Additionally, we support several popular agent frameworks: ReAct, AutoGen, and TaskWeaver. Two key insights from the study emphasize the importance of observability and a well-designed ACI. Observability is crucial for clear root-cause diagnosis; for instance, pinpointing a misconfigured API gateway can be vital in preventing service downtime. Flexibility is another critical factor; the ability to execute arbitrary shell commands allowed for effective troubleshooting in real-time scenarios. Lastly, robust error handling is essential: providing agents with high-quality feedback on execution barriers, such as a failed database connection, ensures swift resolution and continuous improvement.

Example of the ACI on a diagnosis task:

    Agent: I need to first look at the logs to understand if there are any errors or warnings that could indicate problems.
    Action: ```get_logs("compose-post-service", "test-social-network")```
    Service: [2024-Aug-04 23:18:49.365494] <info>: Thrift: Sun Aug 4 23:19:19 2024 TSocket::open() connect() <Host: user-service Port: 9090>: Connection refused
    Agent: Unable to connect to the `user-service` at port 9090, even though the pod is running. This suggests a possible network issue or misconfiguration in service discovery.

Next steps

This research project adopts Microsoft security standards and Responsible AI principles, and we envision this research evolving into a vital resource for organizations aiming to optimize their IT operations. Additionally, we plan to collaborate with various generative AI teams to incorporate AIOpsLab as a benchmark scenario for evaluating state-of-the-art models.
By doing so, we aim to foster innovation and encourage the development of more advanced AIOps solutions. This research is essential not only for IT professionals but also for anyone invested in the future of technology, as it has the potential to redefine how organizations manage operations, respond to incidents, and ultimately serve their customers in an increasingly automated world.

Acknowledgements

We would like to thank Yinfang Chen, Manish Shetty, Yogesh Simmhan, Xuchao Zhang, Jonathan Mace, Dax Vandevoorde, Pedro Las-Casas, Shachee Mishra Gupta, and Suman Nath for contributing to this project.