• Hello, wonderful people! Today, I want to take a moment to celebrate the incredible advancements happening in the world of 3D printing, especially highlighted at the recent Paris Air Show!

    What an exciting week it has been for the additive manufacturing industry! The #3DExpress has been buzzing with news, showcasing how innovation and creativity are taking flight together! The Paris Air Show is not just a platform for the latest planes; it’s a stage for groundbreaking technologies that promise to revolutionize our future!

    Imagine a world where designing and producing complex aircraft parts becomes not only efficient but also sustainable! The use of 3D printing is paving the way for a greener future, reducing waste and making manufacturing more accessible than ever before. The possibilities are endless, and it’s invigorating to witness how these technologies can transform entire industries! 💪🏽

    During the show, we saw some amazing demonstrations of 3D printed components that are not only lightweight but also incredibly strong. This is a game-changer for aerospace engineering! Every layer printed brings us closer to smarter, more efficient air travel, and who wouldn’t want to be part of that journey?

    Let’s not forget the talented minds behind these innovations! The engineers, designers, and creators are the true superheroes, pushing boundaries and inspiring the next generation to dream bigger! Their passion and dedication remind us that with hard work and determination, we can reach for the stars!

    If you’ve ever doubted the power of creativity and technology, let this be your reminder: the future is bright, and we have the tools to shape it! So, let’s stay curious, keep pushing forward, and embrace every opportunity that comes our way! Together, we can soar to new heights!

    Let’s keep the conversation going about how 3D printing and additive manufacturing can change our world. What are your thoughts on these incredible innovations? Share your ideas and let’s inspire each other!

    #3DPrinting #Innovation #ParisAirShow #AdditiveManufacturing #FutureOfFlight
    #3DExpress: Additive Manufacturing at the Paris Air Show
    What has happened this week in the 3D printing industry? In today's 3DExpress we bring you a quick summary of the most notable news from recent days. First up, the Paris Air Show is this…
  • Ankur Kothari Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns.
    But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic.
    This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die,” explores how businesses can translate raw data into actionable insights that drive real results.
    Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

     
    Ankur Kothari Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interaction, purchase history, and other app usage patterns.
    Second would be demographic information: age, location, income, and other relevant personal characteristics.
    Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews.
    Fourth would be the customer journey data.

    We track customer touchpoints across various channels to understand the journey path and conversion. Combining these four primary sources helps us understand the engagement data.
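
    To make those four buckets concrete, here is a minimal sketch of how such engagement data might be grouped in code. The field names and example values are illustrative assumptions, not a schema from the interview.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CustomerEngagement:
    """Illustrative grouping of the four engagement-data buckets described above."""
    behavioral: Dict[str, float] = field(default_factory=dict)   # website interaction, purchase history, app usage
    demographics: Dict[str, str] = field(default_factory=dict)   # age, location, income, ...
    sentiment_scores: List[float] = field(default_factory=list)  # derived from reviews, support, social media
    journey: List[str] = field(default_factory=list)             # ordered touchpoints across channels

# Hypothetical example profile
profile = CustomerEngagement(
    behavioral={"sessions_30d": 12, "purchases_90d": 3},
    demographics={"age_band": "25-34", "region": "EU"},
    sentiment_scores=[0.4, 0.7],
    journey=["ad_click", "web_visit", "email_open", "purchase"],
)
print(profile.journey[-1])  # "purchase"
```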

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is relevance to your business objectives: actionable data directly relates to your specific goals or KPIs. Then we take help from statistical significance.
    Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in.

    You also want to make sure that there is consistency across sources.
    Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory.
    Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy.

    By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions.
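
    As a rough illustration of that statistical-significance filter, the sketch below checks whether a change in a conversion metric is larger than random fluctuation would explain, using a simple two-proportion z-test. The counts and the 0.05 threshold are assumptions for the example, not figures from the interview.

```python
from math import sqrt
from statistics import NormalDist

def is_actionable_lift(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: is the change in conversion rate more than noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return p_value < alpha, p_value

# Assumed example: 480/12,000 conversions last month vs 560/12,100 this month
significant, p = is_actionable_lift(480, 12_000, 560, 12_100)
print(f"actionable={significant}, p-value={p:.4f}")
```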

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs.

    By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points.

    Second would be identifying emerging needs.
    Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing the company to adapt its products or services accordingly.
    Third would be segmentation analysis.
    Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies.
    Last is to build competitive differentiation.

    Engagement data can highlight where our company outperforms competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions.
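
    One hedged sketch of the segmentation analysis mentioned above: clustering customers on a few engagement features to surface under-served groups. The features, values, and cluster count are assumptions for illustration, not the interviewee's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy engagement matrix: [sessions_30d, purchases_90d, avg_sentiment, support_tickets]
X = np.array([
    [12, 3,  0.6, 0],
    [ 2, 0, -0.2, 4],
    [25, 6,  0.8, 1],
    [ 1, 0,  0.1, 0],
    [18, 4,  0.5, 2],
    [ 3, 1, -0.5, 5],
])

kmeans = KMeans(n_clusters=3, random_state=0, n_init=10).fit(X)
for cluster_id in range(3):
    members = X[kmeans.labels_ == cluster_id]
    # Segments with low purchases but a high support load may flag unmet needs
    print(cluster_id, members.mean(axis=0))
```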

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization, a financial services company, where we were very data-driven, and data made a major impact on a critical decision regarding our credit card offerings.
    We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms.
    That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to the millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product which met that specific segment’s needs.

    That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using engagement data in real time, we do quite a few things. Over the past two or three years, we have been using it for dynamic content personalization, adjusting website content, email messaging, or ad creative based on real-time user behavior and preferences.
    We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments.
    Then we also do responsive social media engagement, such as monitoring social media sentiment and trending topics to quickly adapt the messaging and create timely, relevant content.

    With one-on-one personalization, we do a lot of A/B testing as part of overall rapid testing of market elements like subject lines and CTAs, building out the most successful variants of the campaigns.
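
    As a minimal sketch of the automated budget reallocation mentioned above, the snippet below shifts spend toward channels with the best recent return while keeping a small exploration floor. The channel names, ROAS figures, and the proportional rule are assumptions for illustration; the interview describes AI-driven tools rather than this simple heuristic.

```python
def reallocate_budget(channel_roas, total_budget, floor=0.05):
    """Proportionally shift budget toward channels with higher recent ROAS,
    while keeping a small exploration floor for every channel."""
    n = len(channel_roas)
    floor_budget = floor * total_budget
    remaining = total_budget - n * floor_budget
    total_roas = sum(channel_roas.values())
    return {
        channel: floor_budget + remaining * (roas / total_roas)
        for channel, roas in channel_roas.items()
    }

# Assumed example: last week's return on ad spend per channel
roas = {"paid_search": 3.2, "social": 1.4, "email": 5.1, "display": 0.8}
print(reallocate_budget(roas, total_budget=100_000))
```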

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems, and we are tracking each customer’s behavior in real time. The moment they move to a different channel, we know the context, the relevance, and the recent interaction points, so we can serve the right offer.
    So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer.
    That gives our customer or potential customer more one-to-one personalization instead of just segment-based or bulk interaction kind of experience.

    We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers.
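
    Here is a heavily simplified sketch of that idea: looking up a customer's most recent interaction and choosing a follow-up offer from it. An in-memory dictionary stands in for the CDP, and the event names and offers are hypothetical; a real deployment would rely on the CDP's own identity resolution and APIs.

```python
from collections import defaultdict
from datetime import datetime, timezone

# In-memory stand-in for a CDP profile store (illustrative only)
profiles = defaultdict(list)

def record_event(customer_id, channel, event, detail):
    profiles[customer_id].append({
        "ts": datetime.now(timezone.utc),
        "channel": channel,
        "event": event,
        "detail": detail,
    })

def next_best_offer(customer_id):
    """Pick an offer based on the most recent interaction, so an in-branch
    agent sees the same context as the website did the day before."""
    events = profiles.get(customer_id, [])
    if not events:
        return "generic_welcome_offer"
    last = max(events, key=lambda e: e["ts"])
    if last["event"] == "viewed_offer":
        return f"follow_up:{last['detail']}"
    return "generic_welcome_offer"

record_event("cust-42", "web", "viewed_offer", "digital_credit_card")
print(next_best_offer("cust-42"))  # follow_up:digital_credit_card
```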

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development: there are different products, not just the financial products (or whatever an organization sells) but also the mobile apps and websites customers use for transactions. Engagement data improves that kind of product development.
    The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments.

    Customer service also benefits, by anticipating common issues, personalizing support interactions over phone, email, or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention.

    So in general, the cross-functional application of engagement data improves the customer-centric approach throughout the organization.

    8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    I think the biggest challenge is the sheer amount of data we are dealing with. As we become more digitally savvy and most customers move to digital channels, we are getting a lot of data, and that volume can be overwhelming, making it very difficult to identify truly meaningful patterns and insights.

    Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement.

    Because of data silos and overload of data, data quality issues appear. There is inconsistency, and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues could also be due to the wrong format of the data, or the data is stale and no longer relevant.
    As we are growing and adding more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, lack skills to properly interpret the data or apply data insights effectively.
    So there’s a lack of understanding of marketing and sales as domains.
    It’s a huge effort and can take a lot of investment.

    Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.
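
    As a hedged aside on the ROI point above, the calculation itself is simple even when attributing the inputs is not. The figures below are invented purely for illustration.

```python
def roi(incremental_revenue, total_investment):
    """Return on investment as a ratio: (gain - cost) / cost."""
    return (incremental_revenue - total_investment) / total_investment

# Assumed figures: $1.8M attributed revenue on a $1.2M data/martech investment
print(f"ROI: {roi(1_800_000, 1_200_000):.0%}")  # 50%
```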

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. If you cannot stop that from step one—not bringing noise into the data system—that cannot be done by just technical folks or people who do not have business knowledge.
    Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side.

    Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do.
    And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations.
    The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it.

    Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI.
    We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals.

    We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points.
    Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us.
    We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels.
    Another thing is social media listening tools, wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms.

    Last is web analytics tools, which provide detailed insights into your website visitors’ behaviors and engagement metrics across browsers, devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies.
    While we collect data from different sources, we clean the data so it becomes cleaner with every stage of processing.
    We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats.

    On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.
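
    A small illustration of the standardization and automated cleansing steps described above, using pandas. The column names and rules are assumptions; dedicated cleansing tools apply far richer checks.

```python
import pandas as pd

raw = pd.DataFrame({
    "email": ["A@Example.com ", "a@example.com", "b@example.com", None],
    "channel": ["Web", "web", "EMAIL", "email"],
    "purchase_amount": ["100", "100", "bad_value", "42.5"],
})

clean = raw.copy()
# Standardize formats before comparing records across channels
clean["email"] = clean["email"].str.strip().str.lower()
clean["channel"] = clean["channel"].str.lower()
# Coerce numeric fields; invalid entries become NaN so audits can flag them
clean["purchase_amount"] = pd.to_numeric(clean["purchase_amount"], errors="coerce")
# Drop exact duplicates and records with no usable identifier
clean = clean.drop_duplicates().dropna(subset=["email"])
print(clean)
```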

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The first thing that’s been the biggest trend from the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real-time to inform strategic choices.
    Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities.
    We also touched upon hyper-personalization. We are all striving toward hyper-personalization at scale, which is one-on-one personalization. As we capture more engagement data and build bigger systems and infrastructure to process those large volumes, we can achieve those hyper-personalization use cases.
    As the world is collecting more data, privacy concerns and regulations come into play.
    I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies.
    And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture.

    So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

     
    This interview Q&A was conducted with Ankur Kothari, a former Martech executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
  • Stanford Doctors Invent Device That Appears to Be Able to Save Tons of Stroke Patients Before They Die

    Image by Andrew Brodhead

    Researchers have developed a novel device that literally spins away the clots that block blood flow to the brain and cause strokes. As Stanford explains in a blurb, the novel milli-spinner device may be able to save the lives of patients who experience "ischemic stroke" from brain stem clotting.

    Traditional clot removal, a process known as thrombectomy, generally uses a catheter that either vacuums up the blood blockage or uses a wire mesh to ensnare it — a procedure that's as rough and imprecise as it sounds. Conventional thrombectomy has a very low efficacy rate because of this imprecision, and the procedure can result in pieces of the clot breaking off and moving to more difficult-to-reach regions. Thrombectomy via milli-spinner also enters the brain with a catheter, but instead of using a normal vacuum device, it employs a spinning tube outfitted with fins and slits that can suck up the clot much more meticulously.

    Stanford neuroimaging expert Jeremy Heit, who also coauthored a new paper about the device in the journal Nature, explained in the school's press release that the efficacy of the milli-spinner is "unbelievable."

    "For most cases, we’re more than doubling the efficacy of current technology, and for the toughest clots — which we’re only removing about 11 percent of the time with current devices — we’re getting the artery open on the first try 90 percent of the time," Heit said. "This is a sea-change technology that will drastically improve our ability to help people."

    Renee Zhao, the senior author of the Nature paper who teaches mechanical engineering at Stanford and creates what she calls "millirobots," said that conventional thrombectomies just aren't cutting it. "With existing technology, there’s no way to reduce the size of the clot," Zhao said. "They rely on deforming and rupturing the clot to remove it." "What’s unique about the milli-spinner is that it applies compression and shear forces to shrink the entire clot," she continued, "dramatically reducing the volume without causing rupture."

    Indeed, as the team discovered, the device can cut down and vacuum up the clot, shrinking it to as little as five percent of its original size. "It works so well, for a wide range of clot compositions and sizes," Zhao said. "Even for tough... clots, which are impossible to treat with current technologies, our milli-spinner can treat them using this simple yet powerful mechanics concept to densify the fibrin network and shrink the clot."

    Though its main experimental use case is brain clot removal, Zhao is excited about its other uses, too. "We’re exploring other biomedical applications for the milli-spinner design, and even possibilities beyond medicine," the engineer said. "There are some very exciting opportunities ahead."

    More on brains: The Microplastics in Your Brain May Be Causing Mental Health Issues
    Stanford Doctors Invent Device That Appears to Be Able to Save Tons of Stroke Patients Before They Die
    Image by Andrew BrodheadResearchers have developed a novel device that literally spins away the clots that block blood flow to the brain and cause strokes.As Stanford explains in a blurb, the novel milli-spinner device may be able to save the lives of patients who experience "ischemic stroke" from brain stem clotting.Traditional clot removal, a process known as thrombectomy, generally uses a catheter that either vacuums up the blood blockage or uses a wire mesh to ensnare it — a procedure that's as rough and imprecise as it sounds. Conventional thrombectomy has a very low efficacy rate because of this imprecision, and the procedure can result in pieces of the clot breaking off and moving to more difficult-to-reach regions.Thrombectomy via milli-spinner also enters the brain with a catheter, but instead of using a normal vacuum device, it employs a spinning tube outfitted with fins and slits that can suck up the clot much more meticulously.Stanford neuroimaging expert Jeremy Heit, who also coauthored a new paper about the device in the journal Nature, explained in the school's press release that the efficacy of the milli-spinner is "unbelievable.""For most cases, we’re more than doubling the efficacy of current technology, and for the toughest clots — which we’re only removing about 11 percent of the time with current devices — we’re getting the artery open on the first try 90 percent of the time," Heit said. "This is a sea-change technology that will drastically improve our ability to help people."Renee Zhao, the senior author of the Nature paper who teaches mechanical engineering at Stanford and creates what she calls "millirobots," said that conventional thrombectomies just aren't cutting it."With existing technology, there’s no way to reduce the size of the clot," Zhao said. "They rely on deforming and rupturing the clot to remove it.""What’s unique about the milli-spinner is that it applies compression and shear forces to shrink the entire clot," she continued, "dramatically reducing the volume without causing rupture."Indeed, as the team discovered, the device can cut and vacuum up to five percent of its original size."It works so well, for a wide range of clot compositions and sizes," Zhao said. "Even for tough... clots, which are impossible to treat with current technologies, our milli-spinner can treat them using this simple yet powerful mechanics concept to densify the fibrin network and shrink the clot."Though its main experimental use case is brain clot removal, Zhao is excited about its other uses, too."We’re exploring other biomedical applications for the milli-spinner design, and even possibilities beyond medicine," the engineer said. "There are some very exciting opportunities ahead."More on brains: The Microplastics in Your Brain May Be Causing Mental Health IssuesShare This Article #stanford #doctors #invent #device #that
    FUTURISM.COM
    Stanford Doctors Invent Device That Appears to Be Able to Save Tons of Stroke Patients Before They Die
  • NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
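    To put the precision trade-off in concrete terms, here is a minimal back-of-the-envelope sketch (purely illustrative; the parameter count is a hypothetical placeholder, not a figure stated in the article) of how bytes per weight at FP16, FP8, and FP4 translate into the VRAM needed just to hold a model's weights.

        # Rough weight-memory estimate at different precisions.
        # The parameter count below is a hypothetical placeholder, not an official figure.
        BYTES_PER_WEIGHT = {"FP16/BF16": 2.0, "FP8": 1.0, "FP4": 0.5}

        def weight_memory_gb(num_params: float, bytes_per_weight: float) -> float:
            """GB needed for the weights alone (ignores activations, text encoders, etc.)."""
            return num_params * bytes_per_weight / 1e9

        num_params = 8e9  # hypothetical 8-billion-parameter model
        for precision, nbytes in BYTES_PER_WEIGHT.items():
            print(f"{precision}: ~{weight_memory_gb(num_params, nbytes):.1f} GB for weights")

    Halving the bytes per weight roughly halves the weight footprint, which is the intuition behind the FP8 and FP4 paths described above; real VRAM use also includes activations and other model components, which is why the article's totals are higher than weights alone.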
    NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8 — reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance.
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time (JIT), on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
    Stable Diffusion 3.5 quantized FP8 (right) generates images in half the time with similar quality as FP16 (left). Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
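    For readers who want to reproduce the unoptimized baseline, a minimal sketch using Hugging Face's diffusers library is shown below. This is the standard BF16 PyTorch path the article compares against, not the FP8 TensorRT pipeline; the sampler settings are illustrative assumptions, and it requires a GPU with enough VRAM for the full-precision weights.

        # Baseline sketch: Stable Diffusion 3.5 Large in BF16 via Hugging Face diffusers.
        # Not the TensorRT-optimized path; step count and guidance scale are illustrative.
        import torch
        from diffusers import StableDiffusion3Pipeline

        pipe = StableDiffusion3Pipeline.from_pretrained(
            "stabilityai/stable-diffusion-3.5-large",
            torch_dtype=torch.bfloat16,
        )
        pipe.to("cuda")  # needs a GPU with enough VRAM for the BF16 weights

        prompt = (
            "A serene mountain lake at sunrise, crystal clear water reflecting "
            "snow-capped peaks, lush pine trees along the shore, soft morning mist, "
            "photorealistic, vibrant colors, high resolution"
        )
        image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
        image.save("mountain_lake.png")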
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPU models can run the model from memory instead of just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
    FP8 TensorRT boosts SD3.5 Large performance by 2.3x vs. BF16 PyTorch, with 40% less memory use. For SD3.5 Medium, BF16 TensorRT delivers a 1.7x speedup.
    Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch.
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can be done in the background during installation or when they first use the feature.
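    For context, the sketch below shows the traditional ahead-of-time engine build using the long-standing TensorRT Python API — the per-GPU workflow that TensorRT for RTX replaces with on-device JIT building. The ONNX filename is a placeholder, and the exact API surface of the new TensorRT for RTX SDK may differ from this classic flow.

        # Classic ahead-of-time TensorRT engine build from an ONNX model.
        # "model.onnx" is a placeholder path; TensorRT for RTX performs a similar build
        # step just-in-time on the end user's device instead of per GPU class up front.
        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)  # explicit batch, the standard mode
        )
        parser = trt.OnnxParser(network, logger)

        with open("model.onnx", "rb") as f:
            if not parser.parse(f.read()):
                errors = [str(parser.get_error(i)) for i in range(parser.num_errors)]
                raise RuntimeError("ONNX parse failed:\n" + "\n".join(errors))

        config = builder.create_builder_config()
        config.set_flag(trt.BuilderFlag.FP16)  # allow reduced-precision kernels where supported

        serialized_engine = builder.build_serialized_network(network, config)
        with open("model.engine", "wb") as f:
            f.write(serialized_engine)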
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
    BLOGS.NVIDIA.COM
  • How to choose a programmatic video advertising platform: 8 considerations

    Whether you’re an advertiser or a publisher, partnering up with the right programmatic video advertising platform is one of the most important business decisions you can make. More than half of U.S. marketing budgets are now devoted to programmatically purchased media, and there’s no indication that trend will reverse any time soon.
    Everybody wants to find the solution that’s best for their bottom line. However, the specific considerations that should go into choosing the right programmatic video advertising solution differ depending on whether you have supply to sell or are looking for an audience for your advertisements. This article will break down key factors for both mobile advertisers and mobile publishers to keep in mind as they search for a programmatic video advertising platform. Before we get into the specifics on either end, let’s recap the basic concepts.
    What is a programmatic video advertising platform?
    A programmatic video advertising platform combines tools, processes, and marketplaces to place video ads from advertising partners in ad placements furnished by publishing partners. The “programmatic” part of the term means that it’s all done procedurally via automated tools, integrating with demand-side platforms and supply-side platforms to allow advertising placements to be bid upon, selected, and displayed in fractions of a second (a simplified sketch of this bidding flow appears at the end of this article).
    If a mobile game has ever offered you extra rewards for watching a video and you found yourself watching an ad for a related game a split second later, you’ve likely been on the user side of a programmatic advertising transaction. Now let’s take a look at what considerations make for the ideal programmatic video advertising platform for the other two main parties involved.
    4 points to help advertisers choose the best programmatic platform
    Looking for the best way to leverage your video demand-side platform? These are four key points for advertisers to consider when trying to find the right programmatic video advertising platform.
    A large, engaged audience
    One of the most important things a programmatic video advertising platform can do for advertisers is put their creative content in front of as many people as possible. However, it’s not enough to just pass your content in front of the most eyeballs. It’s equally important for the platform to give you access to engaged audiences who are more likely to convert so you can make the most of your advertising dollar.
    Full-screen videos to grab attention
    You need every advantage you can get when you’re grappling for the attention of a busy mobile user. Your video demand-side platform should prioritize full-screen takeovers when and where they make sense, making sure your content isn’t just playing unnoticed on the far side of the screen.
    A range of ad options that are easy to test
    Your programmatic video advertising partner should be able to offer a broad variety of creative and placement options, including interstitial and rewarded ads. It should also enable you to test, iterate, and optimize ads as soon as they’re put into rotation, ensuring your ad spend is meeting your targets and allowing for fast and flexible changes if needed.
    Simple access to supply
    Even the most powerful programmatic video advertising platform is no good if it’s impractical to get running. Look for partners that allow instant access to supply through tried-and-true platforms like Google Display & Video 360, Magnite, and others. On top of that, you should seek out a private exchange to ensure access to premium inventory.
    4 points for publishers in search of the best programmatic platform
    You work hard to make the best apps for your users, and you deserve to partner up with a programmatic video advertising platform that works hard too. Serving video ads that both keep users engaged and keep your profits rising can be a tricky needle to thread, but the right platform should make your part of the process simple and effective.
    A large selection of advertisers
    Encountering the same ads over and over again can get old fast — and diminish engagement. On top of that, a small selection of advertisers means fewer chances for your users to connect with an ad and convert — which means less revenue, too. The ideal programmatic video advertising platform will partner with thousands of advertisers to fill your placements with fresh, engaging content.
    Rewarded videos and offerwalls
    Interstitial video ads aren’t likely to disappear any time soon, but players strongly prefer other means of advertisement. In fact, 76% of US mobile gamers say they prefer rewarded videos over interstitial ads. Giving players the choice of when to watch ads, with the inducement of in-game rewards, can be very powerful — and an offerwall is another powerful way to put the ball in your player’s court.
    Easy supply-side SDK integration
    The time your developers spend integrating a new programmatic video advertising solution into your apps is time they could have spent making those apps more engaging for users. While any backend adjustment will naturally take some time to implement, your new programmatic partner should offer a powerful, industry-standard SDK to make the process fast and non-disruptive.
    Support for programmatic mediation
    Mediators such as LevelPlay by ironSource automatically prioritize ad demand from multiple third-party networks, optimizing your cash flow and reducing work on your end. Your programmatic video advertising platform should seamlessly integrate with mediators to make the most of each ad placement, every time.
    Pick a powerful programmatic partner
    Thankfully, advertisers and publishers alike can choose one solution that checks all the above boxes and more. For advertisers, the ironSource Programmatic Marketplace will connect you with targeted audiences in thousands of apps that gel with your brand. For publishers, ironSource’s marketplace means a massive selection of ads that your users and your bottom line will love.
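    The bidding flow described above can be illustrated with a deliberately simplified sketch: a publisher's placement is offered to several demand-side platforms, each returns a bid, and the highest bid above the price floor wins. All DSP names, prices, and the first-price rule below are hypothetical; real exchanges add targeting, fraud checks, and more elaborate auction mechanics.

        # Deliberately simplified programmatic auction for a rewarded-video placement.
        # All DSP names, bids, and the auction rule are hypothetical illustrations.
        from dataclasses import dataclass

        @dataclass
        class Bid:
            dsp: str           # demand-side platform submitting the bid
            cpm: float         # bid price per thousand impressions (USD)
            creative_url: str  # video creative to play if this bid wins

        def run_auction(bids: list[Bid], floor_cpm: float) -> Bid | None:
            """Pick the highest bid at or above the publisher's price floor (first-price rule)."""
            eligible = [b for b in bids if b.cpm >= floor_cpm]
            return max(eligible, key=lambda b: b.cpm) if eligible else None

        bids = [
            Bid("dsp_alpha", 12.50, "https://example.com/ad1.mp4"),
            Bid("dsp_beta", 9.75, "https://example.com/ad2.mp4"),
            Bid("dsp_gamma", 14.20, "https://example.com/ad3.mp4"),
        ]
        winner = run_auction(bids, floor_cpm=10.0)
        print(f"Winning bid: {winner.dsp} at ${winner.cpm:.2f} CPM" if winner else "No fill")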
    UNITY.COM
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”          
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning. 
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa,
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
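    To make the shape of that probe concrete, here is a minimal sketch of the kind of harness it implies. It is not Peter Lee’s actual test: the `chat` helper, the clinical vignette, the two planted mistakes, and the keyword-based grading are all illustrative placeholders, and a real evaluation would grade the reply far more carefully.

```python
# Illustrative sketch of the "two planted errors" probe described above.
# Everything here is hypothetical: `chat` stands in for whatever LLM is under
# test, and the case text and grading keywords are made up for illustration.

def chat(prompt: str) -> str:
    raise NotImplementedError("wire this up to the model under test")

CASE = """58-year-old with exertional chest pressure; normal troponin; mild anemia on labs.
Proposed differential:
1. Stable angina
2. Pulmonary embolism, ruled out because the troponin is normal
Please critique this differential."""
# Planted mistakes: a normal troponin does not rule out PE (textbook error),
# and GERD is deliberately left off the list (error of omission).

def run_probe() -> dict:
    reply = chat(CASE).lower()
    return {
        "caught_technical_error": "troponin" in reply and "rule out" in reply,
        "caught_omission": "gerd" in reply or "reflux" in reply,
        # The hard part: does it actually say the differential is wrong,
        # rather than praising it?
        "willing_to_disagree": any(w in reply for w in ("incorrect", "mistake", "error")),
    }
```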
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that? 
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
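    A toy illustration of the trap Bubeck is pointing at, under stated assumptions rather than anything from the production pipeline: here the proxy reward model partly tracks answer quality but over-rewards agreeable tone, and the harder you optimize against it (crude best-of-n selection stands in for actual RLHF), the more the gains flow into flattery rather than substance.

```python
# Toy Goodhart/over-optimization demo, not the real training setup.
# Assumptions baked in: flattery crowds out substance, and the learned reward
# model over-weights agreeable tone relative to true answer quality.
import random

random.seed(0)

def sample_answer():
    s = random.random()                          # how much the answer flatters the user
    quality = random.random() * (1.0 - 0.9 * s)  # toy assumption: flattery crowds out substance
    return {"sycophancy": s, "quality": quality}

def reward_model(a):
    # Learned proxy: partly tracks quality, but over-rewards agreeable tone.
    return a["quality"] + 2.0 * a["sycophancy"]

def best_of_n(n):
    """Optimize harder against the proxy by keeping the best of n candidates."""
    return max((sample_answer() for _ in range(n)), key=reward_model)

for n in (1, 4, 32, 256):
    picks = [best_of_n(n) for _ in range(1000)]
    proxy = sum(reward_model(a) for a in picks) / len(picks)
    true_quality = sum(a["quality"] for a in picks) / len(picks)
    print(f"n={n:4d}  proxy reward={proxy:.2f}  true quality={true_quality:.2f}")
# Proxy reward climbs with n while true quality falls: pushing too hard on an
# imperfect reward model selects for sycophancy, the trap described above.
```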
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
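    As a small anchor for what “checkable for validity” means in practice, here is a deliberately trivial Lean 4 example; the prediction above is that the same mechanical kernel check applies unchanged to proofs far too long and alien for any human to read.

```lean
-- A deliberately tiny machine-checkable proof in Lean 4.
-- The kernel verifies the proof term mechanically; nothing about that check
-- changes when the proof is millions of steps long instead of one.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```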
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
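    To give a rough sense of what task-based, rubric-graded evaluation looks like in code, here is a minimal sketch of the general pattern. It is not the actual HealthBench or ADeLe interface: the `chat` helper, the example task, the rubric items, and the yes/no grading prompt are all hypothetical.

```python
# Rough sketch of rubric-based task evaluation (the general pattern, not the
# actual HealthBench or ADeLe APIs). `chat`, the task, and the rubric are
# hypothetical placeholders.

def chat(prompt: str) -> str:
    raise NotImplementedError("call the model being evaluated (or the grader)")

TASK = {
    "prompt": ("A patient messages: 'I've had a fever of 39C for three days and "
               "now a rash on my arms. What should I do?'"),
    "rubric": [
        # (criterion, weight), illustrative only
        ("advises seeking in-person medical evaluation", 3),
        ("asks about or mentions relevant red-flag symptoms", 2),
        ("avoids giving a definitive diagnosis from this message alone", 2),
        ("uses clear, non-alarmist language", 1),
    ],
}

def grade(response: str, rubric) -> float:
    """Score each rubric item with a grader model; return the weighted fraction met."""
    earned = total = 0.0
    for criterion, weight in rubric:
        verdict = chat(
            f"Response:\n{response}\n\nDoes this response satisfy the criterion "
            f"'{criterion}'? Answer yes or no."
        )
        total += weight
        if verdict.strip().lower().startswith("yes"):
            earned += weight
    return earned / total

# Example use: score = grade(chat(TASK["prompt"]), TASK["rubric"])
```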
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
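    At its core, the “patients like me” idea is a similarity query over patient data followed by a summary of outcomes. A minimal sketch, with an entirely synthetic cohort and a toy three-feature distance standing in for the far richer clinical representations this would really require:

```python
# Minimal "patients like me" sketch: rank a synthetic cohort by similarity to
# the current patient and report the nearest neighbors' diagnoses and outcomes.
# The features, scaling, and cohort are toy placeholders, not real data.
import math

COHORT = [
    # (age, systolic_bp, hba1c, diagnosis, outcome), entirely synthetic
    (61, 150, 8.1, "type 2 diabetes", "controlled on metformin"),
    (59, 145, 7.8, "type 2 diabetes", "controlled with lifestyle changes"),
    (64, 160, 9.0, "type 2 diabetes", "eventually required insulin"),
    (35, 118, 5.2, "no chronic disease", "no treatment needed"),
]

def distance(a, b):
    """Euclidean distance over crudely scaled numeric features."""
    scales = (10.0, 20.0, 1.0)  # rough spread of age, systolic BP, HbA1c
    return math.sqrt(sum(((x - y) / s) ** 2 for x, y, s in zip(a, b, scales)))

def patients_like_me(age, systolic_bp, hba1c, k=3):
    ranked = sorted(COHORT, key=lambda row: distance((age, systolic_bp, hba1c), row[:3]))
    return [(dx, outcome) for *_, dx, outcome in ranked[:k]]

print(patients_like_me(62, 152, 8.3))
```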
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.
And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, then yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture, but more healthcare examples than anything.
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. 
So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. 
But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. 
So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.
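Lee's point about proof-checking languages is easy to make concrete. Below is a tiny Lean 4 illustration of my own, not anything from the conversation: the kernel mechanically certifies both statements, which is exactly the property that would let a machine-generated proof be trusted even if no human reads it.

```lean
-- Two machine-checkable statements. The Lean kernel verifies both;
-- no human needs to inspect the proof terms for the results to be trusted.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

example : 2 + 3 = 5 := rfl
```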
LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. 
So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
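The kind of evaluation discussed in this episode, both the task-oriented benchmarks and the planted-error differential test Lee describes, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration only: `call_model` is a stand-in for whatever chat-completion API you use (here it returns a canned reply so the sketch runs end to end), and the clinical case, planted errors, and keyword checks are invented for the example, not drawn from HealthBench, ADeLe, or the show.

```python
# Minimal sketch of a planted-error evaluation, in the spirit of the
# differential-diagnosis test described in the conversation above.

def call_model(prompt: str) -> str:
    """Hypothetical stand-in: swap in a real chat-completion call here.
    Returns a canned reply so the sketch runs without a real model."""
    return ("A D-dimer is not the right test for suspected cardiac ischemia; "
            "troponin and an ECG are. Also, acute coronary syndrome is "
            "missing from this differential entirely.")

CASE = (
    "A 58-year-old presents with exertional chest pressure and diaphoresis.\n"
    "Proposed differential (please review and critique):\n"
    "1. Stable angina, to be ruled out with a D-dimer\n"  # planted technical error: wrong test
    "2. GERD\n"
    "3. Musculoskeletal chest pain\n"
)  # planted omission: acute coronary syndrome is deliberately not listed

PROMPT = ("You are reviewing a colleague's differential diagnosis. "
          "Point out any mistakes or important omissions.\n\n" + CASE)

def score(reply: str) -> dict:
    """Crude keyword check: did the model flag both planted problems?"""
    text = reply.lower()
    return {
        "flags_wrong_test": "d-dimer" in text and ("not" in text or "troponin" in text),
        "flags_omission": "acute coronary" in text or "myocardial infarction" in text,
    }

if __name__ == "__main__":
    print(score(call_model(PROMPT)))
```

In a real benchmark the scoring would be far more careful, with structured rubrics and clinician review, but the shape is the same: present a realistic task, plant known defects, and check whether the model surfaces them.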
  • Sienna Net-Zero Home / billionBricks

    Sienna Net-Zero Home / billionBricks
    © Ron Mendoza, Mark Twain C, BB team
    Houses, Sustainability • Quezon City, Philippines

    Architects: billionBricks
    Area: 45 m²
    Year: 2024
    Photographs: Ron Mendoza, Mark Twain C, BB team
    Text description provided by the architects. Built to address homelessness and climate change, the Sienna Net-Zero Home is a self-sustaining, solar-powered, cost-efficient, and compact housing solution. This climate-responsive and affordable home, located in Quezon City, Philippines, represents a revolutionary vision for social housing through its integration of thoughtful design, sustainability, and energy self-sufficiency.

    Designed with the unique tropical climate of the Philippines in mind, the Sienna Home prioritizes natural ventilation, passive cooling, and rainwater management to enhance indoor comfort and reduce reliance on artificial cooling systems. The compact 4.5m x 5.1m floor plan has been meticulously optimized for functionality, offering a flexible layout that grows and adapts to the families living in them.

    A key architectural feature is BillionBricks' innovative Powershade technology - an advanced solar roofing system that serves multiple purposes. Beyond generating clean, renewable energy, it acts as a protective heat barrier, reducing indoor temperatures and improving thermal comfort. Unlike conventional solar panels, Powershade seamlessly integrates with the home's structure, providing reliable energy generation while doubling as a durable roof. This makes the Sienna Home energy-positive, meaning it produces more electricity than it consumes, lowering utility costs and promoting long-term energy independence. Excess power can also be stored or sold back to the grid, creating an additional financial benefit for homeowners.

    When multiple Sienna Homes are built together, the innovative PowerShade roofing solution transcends its role as an individual energy source and transforms into a utility-scale solar rooftop farm, capable of powering essential community facilities and generating additional income. This shared energy infrastructure fosters a sense of collective empowerment, enabling residents to actively participate in a sustainable and financially rewarding energy ecosystem.

    The Sienna Home is built using lightweight prefabricated components, allowing for rapid on-site assembly while maintaining durability and structural integrity. This modular approach enables scalability, making it an ideal prototype for large-scale, cost-effective housing developments. The design also allows for future expansions, giving homeowners the flexibility to adapt their living spaces over time.

    Adhering to BP 220 social housing regulations, the unit features a 3-meter front setback and a 2-meter rear setback, ensuring proper ventilation, safety, and community-friendly spaces. Additionally, corner units include a 1.5-meter offset, enhancing privacy and accessibility within neighborhood layouts. Beyond providing a single-family residence, the Sienna House is designed to function within a larger sustainable community model, integrating shared green spaces, pedestrian pathways, and decentralized utilities. By promoting energy independence and environmental resilience, the project sets a new precedent for affordable yet high-quality housing solutions in rapidly urbanizing regions.

    The Sienna Home in Quezon City serves as a blueprint for future developments, proving that low-cost housing can be both architecturally compelling and socially transformative. By rethinking traditional housing models, BillionBricks is pioneering a future where affordability and sustainability are seamlessly integrated.
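To give a feel for what "energy-positive" means in practice, here is a rough illustrative balance. The panel density, sun hours, losses, and household consumption below are assumptions made for the example, not published figures for the Sienna Home or PowerShade.

```python
# Purely illustrative daily energy balance for a small solar-roofed home.
# All figures are assumptions for the sake of the example, not project data.

roof_area_m2 = 23.0          # roughly the 4.5 m x 5.1 m footprint
panel_w_per_m2 = 190.0       # assumed usable PV power density
peak_sun_hours = 4.5         # assumed daily average for the Philippines
system_losses = 0.80         # inverter, temperature, soiling, etc.
household_kwh_per_day = 6.0  # assumed small-household consumption

pv_kw = roof_area_m2 * panel_w_per_m2 / 1000.0
generation_kwh = pv_kw * peak_sun_hours * system_losses
surplus_kwh = generation_kwh - household_kwh_per_day

print(f"assumed array: {pv_kw:.1f} kWp")
print(f"generation: {generation_kwh:.1f} kWh/day vs consumption: {household_kwh_per_day:.1f} kWh/day")
print(f"surplus available to store or export: {surplus_kwh:.1f} kWh/day")
```

Under these assumptions the roof would generate roughly two and a half times what the household consumes, which is the sense in which a home like this could run a positive balance and export the remainder.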

    Published on June 15, 2025. Cite: "Sienna Net-Zero Home / billionBricks" 14 Jun 2025. ArchDaily. <https://www.archdaily.com/1031072/sienna-billionbricks> ISSN 0719-8884
    #sienna #netzero #home #billionbricks
    WWW.ARCHDAILY.COM
    Sienna Net-Zero Home / billionBricks
  • VRS - The Must-Know Material Optimization Trick in UE5

    UE5 lets you optimize materials using Variable Rate Shading (VRS). It runs your pixel shader once per block (such as 2x2 or 4x4 pixels), then spreads that result — reducing pixel shader cost with minimal visual loss.
    This feature is not related to LOD, Nanite or Virtual Shadow Maps - it’s a separate material-level optimization.

    In practice, you might see 10–50% improvement on fill-rate-bound scenes, sometimes more, often less (a back-of-envelope sketch follows below).
    Works per-material, no global setup needed.
    Available since UE5.0 (DX12/Vulkan required).
    To enable: open your material — Details — Advanced — Shading Rate (1x1) — change it.

    The larger the block, the less GPU load, but also more visible blur
    Great to be used on background VFX, or anything that doesn’t need crisp pixel detail.

    Doesn’t work with DLSS or on all platforms.
    Not supported for masked materials in UE5.6. May be broken with masked materials in earlier versions.
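As a back-of-envelope way to think about the numbers above: with an NxN shading rate the pixel shader runs roughly once per NxN-pixel block for that material, so the frame-level win is capped by how much of the frame's pixel-shading cost that material represents. The sketch below is illustrative only; the material share and the simple linear model are assumptions, and real gains depend on overdraw, wave occupancy, and where the bottleneck actually is.

```python
# Back-of-envelope estimate of VRS savings for one material.
# Illustrative only; real gains depend on overdraw, occupancy, etc.

def vrs_saving(block: int, material_share: float) -> float:
    """Fraction of total frame pixel-shading cost saved.

    block          -- shading-rate block edge (2 for 2x2, 4 for 4x4)
    material_share -- fraction of frame pixel-shading cost from this material
    """
    per_material_saving = 1.0 - 1.0 / (block * block)  # e.g. 2x2 -> 75% fewer invocations
    return material_share * per_material_saving

if __name__ == "__main__":
    # A background VFX material that accounts for ~20% of pixel-shading cost:
    print(f"2x2: {vrs_saving(2, 0.20):.0%} of frame shading cost saved")  # ~15%
    print(f"4x4: {vrs_saving(4, 0.20):.0%} of frame shading cost saved")  # ~19%
```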

    I learned about this feature thanks to a super-duper-senior-boss-chief-king tech art wizard — Dmytro Baidachnyi. All hype to him.
    Cheers
    #vrs #mustknow #material #optimization #trick
    REALTIMEVFX.COM
    VRS - The Must-Know Material Optimization Trick in UE5🤖
  • UMass and MIT Test Cold Spray 3D Printing to Repair Aging Massachusetts Bridge

    Researchers from the US-based University of Massachusetts Amherst, in collaboration with the Massachusetts Institute of Technology Department of Mechanical Engineering, have applied cold spray to repair the deteriorating “Brown Bridge” in Great Barrington, built in 1949. The project marks the first known use of this method on bridge infrastructure and aims to evaluate its effectiveness as a faster, more cost-effective, and less disruptive alternative to conventional repair techniques.
    “Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” said Simos Gerasimidis, associate professor of civil and environmental engineering at the University of Massachusetts Amherst. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone to that,” he added.
    The pilot project is also a collaboration with the Massachusetts Department of Transportation, the Massachusetts Technology Collaborative, the U.S. Department of Transportation, and the Federal Highway Administration. It was supported by the Massachusetts Manufacturing Innovation Initiative, which provided essential equipment for the demonstration.
    Members of the UMass Amherst and MIT Department of Mechanical Engineering research team, led by Simos Gerasimidis. Photo via UMass Amherst.
    Tackling America’s Bridge Crisis with Cold Spray Technology
    Nearly half of the bridges across the United States are in “fair” condition, while 6.8% are classified as “poor,” according to the 2025 Report Card for America’s Infrastructure. In Massachusetts, about 9% of the state’s 5,295 bridges are considered structurally deficient. The costs of restoring this infrastructure are projected to exceed billion—well beyond current funding levels. 
    The cold spray method consists of propelling metal powder particles at high velocity onto the beam’s surface. Successive applications build up additional layers, helping restore its thickness and structural integrity. This method has successfully been used to repair large structures such as submarines, airplanes, and ships, but this marks the first instance of its application to a bridge.
    One of cold spray’s key advantages is its ability to be deployed with minimal traffic disruption. “Every time you do repairs on a bridge you have to block traffic, you have to make traffic controls for substantial amounts of time,” explained Gerasimidis. “This will allow us to work on this actual bridge while cars are going.”
    To enhance precision, the research team integrated 3D LiDAR scanning technology into the process. Unlike visual inspections, which can be subjective and time-consuming, LiDAR creates high-resolution digital models that pinpoint areas of corrosion. This allows teams to develop targeted repair plans and deposit materials only where needed—reducing waste and potentially extending a bridge’s lifespan.
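To illustrate how a scan-driven repair plan might work in principle, here is a hypothetical sketch: compare a measured thickness map against the nominal section, flag cells below a tolerance, and size the cold-spray deposition from the deficit. The thicknesses, grid, and per-pass build-up are invented numbers, and this is not the UMass Amherst/MIT team's actual workflow.

```python
import numpy as np

# Hypothetical illustration of scan-guided repair planning:
# flag corroded cells and estimate cold-spray deposition volume.
# All numbers are invented for the example.

nominal_mm = 9.5          # as-built section thickness
tolerance_mm = 8.0        # repair anything thinner than this
layer_per_pass_mm = 0.25  # assumed cold-spray build-up per pass
cell_area_mm2 = 25.0      # 5 mm x 5 mm grid cells from the scan

rng = np.random.default_rng(0)
measured_mm = nominal_mm - rng.uniform(0.0, 3.0, size=(40, 40))  # stand-in for a LiDAR-derived map

deficit_mm = np.clip(nominal_mm - measured_mm, 0.0, None)
repair_mask = measured_mm < tolerance_mm

volume_mm3 = float((deficit_mm * repair_mask * cell_area_mm2).sum())
passes = int(np.ceil(deficit_mm[repair_mask].max() / layer_per_pass_mm)) if repair_mask.any() else 0

print(f"cells needing repair: {int(repair_mask.sum())} of {repair_mask.size}")
print(f"deposition volume: {volume_mm3 / 1000:.1f} cm^3, worst-case passes: {passes}")
```

The appeal of this kind of plan is that material is deposited only where the scan shows a deficit, which is the waste-reduction point made above.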
    Next steps: Testing Cold-Sprayed Repairs
    The bridge is scheduled for demolition in the coming years. When that happens, researchers will retrieve the repaired sections for further analysis. They plan to assess the durability, corrosion resistance, and mechanical performance of the cold-sprayed steel in real-world conditions, comparing it to results from laboratory tests.
    “This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” said John Hart, Class of 1922 Professor in the Department of Mechanical Engineering at MIT. “I think we’re just at the beginning of a digital transformation of bridge inspection, repair and maintenance, among many other important use cases.”
    3D Printing for Infrastructure Repairs
    Beyond cold spray techniques, other innovative 3D printing methods are emerging to address construction repair challenges. For example, researchers at University College London have developed an asphalt 3D printer specifically designed to repair road cracks and potholes. “The material properties of 3D printed asphalt are tunable, and combined with the flexibility and efficiency of the printing platform, this technique offers a compelling new design approach to the maintenance of infrastructure,” the UCL team explained.
    Similarly, in 2018, Cintec, a Wales-based international structural engineering firm, contributed to restoring the historic Government building known as the Red House in the Republic of Trinidad and Tobago. This project, managed by Cintec’s North American branch, marked the first use of additive manufacturing within sacrificial structures. It also featured the installation of what are claimed to be the longest reinforcement anchors ever inserted into a structure—measuring an impressive 36.52 meters.
Join our Additive Manufacturing Advantage (AMAA) event on July 10th, where AM leaders from Aerospace, Space, and Defense come together to share mission-critical insights. Online and free to attend. Secure your spot now.
Who won the 2024 3D Printing Industry Awards?
Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
You can also follow us on LinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.
    Featured image shows members of the UMass Amherst and MIT Department of Mechanical Engineering research team, led by Simos Gerasimidis. Photo via UMass Amherst.
  • Op-ed: Canada’s leadership in solar air heating—Innovation and flagship projects

Solar air heating is among the most cost-effective applications of solar thermal energy. These systems are used for space heating and for preheating fresh air for ventilation, typically using glazed or unglazed perforated solar collectors. The collectors draw in outside air, heat it using solar energy, and then distribute it through ductwork to meet building heating and fresh air needs. In 2024, Canada again led the world in solar air heating adoption, for at least the seventh year in a row. The four key suppliers – Trigo Energies, Conserval Engineering, Matrix Energy, and Aéronergie – reported a combined 26,203 m² (282,046 ft²) of collector area sold last year. Several of these providers are optimistic about the growing demand. These findings come from the newly released Canadian Solar Thermal Market Survey 2024, commissioned by Natural Resources Canada.
Canada is the global leader in solar air heating. The market is driven by a strong network of experienced system suppliers, optimized technologies, and a handful of small but favorable funding programs – especially in the province of Quebec. Architects and developers are increasingly turning to these cost-effective, façade-integrated systems as a practical solution for reducing onsite natural gas consumption.
Despite its cold climate, Canada benefits from strong solar potential, with solar irradiance in many areas rivaling or even exceeding that of parts of Europe. This makes solar air heating not only viable, but especially valuable in buildings with high fresh air requirements, including schools, hospitals, and offices. The projects highlighted in this article showcase the versatility and relevance of solar air heating across a range of building types, from new constructions to retrofits.
Figure 1: Preheating air for industrial buildings: 2,750 m² (29,600 ft²) of Calento SL solar air collectors cover all south-west and south-east facing facades of the FAB3R factory in Trois-Rivières, Quebec. The hourly unitary flow rate is set at 41 m³/h/m² (2.23 cfm/ft²) of collector area, at the lower end of the range, because only a limited number of intake fans were close enough to the solar façade to avoid long ventilation ductwork. Photo: Trigo Energies
    Quebec’s solar air heating boom: the Trigo Energies story
    Trigo Energies makes almost 90 per cent of its sales in Quebec. “We profit from great subsidies, as solar air systems are supported by several organizations in our province – the electricity utility Hydro Quebec, the gas utility Energir and the Ministry of Natural Resources,” explained Christian Vachon, Vice President Technologies and R&D at Trigo Energies.
    Trigo Energies currently has nine employees directly involved in planning, engineering and installing solar air heating systems and teams up with several partner contractors to install mostly retrofit projects. “A high degree of engineering is required to fit a solar heating system into an existing factory,” emphasized Vachon. “Knowledge about HVAC engineering is as important as experience with solar thermal and architecture.”
    One recent Trigo installation is at the FAB3R factory in Trois-Rivières. FAB3R specializes in manufacturing, repairing, and refurbishing large industrial equipment. Its air heating and ventilation system needed urgent renovation because of leakages and discomfort for the workers. “Due to many positive references he had from industries in the area, the owner of FAB3R contacted us,” explained Vachon. “The existence of subsidies helped the client to go for a retrofitting project including solar façade at once instead of fixing the problems one bit at a time.” Approximately 50 per cent of the investment costs for both the solar air heating and the renovation of the indoor ventilation system were covered by grants and subsidies. FAB3R profited from an Energir grant targeted at solar preheating, plus an investment subsidy from the Government of Quebec’s EcoPerformance Programme.
     
    Blue or black, but always efficient: the advanced absorber coating
In October 2024, the majority of the new 2,750 m² (29,600 ft²) solar façade at FAB3R began operation (see Figure 1). According to Vachon, the system is expected to cover approximately 13 per cent of the factory’s annual heating demand, which is otherwise met by natural gas. Trigo Energies equipped the façade with its high-performance Calento SL collectors, featuring a notable innovation: a selective, low-emissivity coating that withstands outdoor conditions. Introduced by Trigo in 2019 and manufactured by Almeco Group from Italy, this advanced coating is engineered to maximize solar absorption while minimizing heat loss via infrared emission, enhancing the overall efficiency of the system.
The high-efficiency coating is now standard in Trigo’s air heating systems. According to the manufacturer, the improved collector design shows a 25 to 35 per cent increase in yield over the former generation of solar air collectors with black paint. Testing conducted at Queen’s University confirms this performance advantage. Researchers measured the performance of transpired solar air collectors both with and without a selective coating, mounted side-by-side on a south-facing vertical wall. The results showed that the collectors with the selective coating produced 1.3 to 1.5 times more energy than those without it. In 2024, the monitoring results were jointly published by Queen’s University and CanmetENERGY in a paper titled Performance Comparison of a Transpired Air Solar Collector with Low-E Surface Coating.
    Selective coating, also used on other solar thermal technologies including glazed flat plate or vacuum tube collectors, has a distinctive blue color. Trigo customers can, however, choose between blue and black finishes. “By going from the normal blue selective coating to black selective coating, which Almeco is specially producing for Trigo, we lose about 1 per cent in solar efficiency,” explained Vachon.
Figure 2: Building-integrated solar air heating façade with MatrixAir collectors at the firehall building in Mont Saint Hilaire, south of Montreal. The 190 m² (2,045 ft²) south-facing wall preheats the fresh air, reducing natural gas consumption by 18 per cent compared to the conventional make-up system. Architect: Leclerc Architecture. Photo: Matrix Energy
    Matrix Energy: collaborating with architects and engineers in new builds
The key target customer group for Matrix Energy is public buildings – mainly new construction. “Since the pandemic, schools are more conscious about fresh air, and solar preheating of the incoming fresh air has a positive impact over the entire school year,” noted Brian Wilkinson, President of Matrix Energy.
    Matrix Energy supplies systems across Canada, working with local partners to source and process the metal sheets used in their MatrixAir collectors. These metal sheets are perforated and then formed into architectural cladding profiles. The company exclusively offers unglazed, single-stage collectors, citing fire safety concerns associated with polymeric covers.
“We have strong relationships with many architects and engineers who appreciate the simplicity and cost-effectiveness of transpired solar air heating systems,” said Wilkinson, describing the company’s sales approach. “Matrix handles system design and supplies the necessary materials, while installation is carried out by specialized cladding and HVAC contractors overseen by on-site architects and engineers,” Wilkinson added.
    Finding the right flow: the importance of unitary airflow rates
One of the key design factors in solar air heating systems is the amount of air that passes through each square meter of the perforated metal absorber, known as the unitary airflow rate. The principle is straightforward: higher airflow rates deliver more total heat to the building, while lower flow rates result in higher outlet air temperatures. Striking the right balance between air volume and temperature gain is essential for efficient system performance.
For unglazed collectors mounted on building façades, typical hourly flow rates range between 120 and 170 m³/h/m², or 6.6 to 9.4 cfm/ft². However, Wilkinson suggests that an hourly airflow rate of around 130 m³/h/m² (7.2 cfm/ft²) offers the best cost-benefit balance for building owners. If the airflow is lower, the system will deliver higher air temperatures, but it would then need a much larger collector area to achieve the same air volume and optimum performance, he explained.
It’s also crucial for the flow rate to overcome external wind pressure. As wind passes over the absorber, air flow through the collector’s perforations is reduced, resulting in heat losses to the environment. This effect becomes even more pronounced in taller buildings, where wind exposure is greater. To ensure the system performs well even in these conditions, higher hourly airflow rates, typically between 150 and 170 m³/h/m² (8.3 to 9.4 cfm/ft²), are necessary.
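For readers who want to sanity-check the unit conversions and the flow-versus-temperature trade-off described above, here is a minimal sketch in Python. The useful solar gain of 350 W/m² is a placeholder assumption, not a figure from the article, and the article’s own rounding (7.2 cfm/ft² for 130 m³/h/m²) differs slightly from the constants used here.

# 1 m³/h ≈ 0.5886 cfm and 1 m² ≈ 10.764 ft², so 1 m³/h/m² ≈ 0.0547 cfm/ft².
M3H_M2_TO_CFM_FT2 = 0.5886 / 10.764

def to_cfm_ft2(flow_m3h_m2: float) -> float:
    return flow_m3h_m2 * M3H_M2_TO_CFM_FT2

def outlet_temp_rise_k(useful_gain_w_m2: float, flow_m3h_m2: float) -> float:
    # Air temperature rise across the collector for a given useful solar gain per m² of absorber.
    # Assumes air density of about 1.2 kg/m³ and specific heat of about 1005 J/(kg·K).
    mass_flow_kg_s = flow_m3h_m2 / 3600.0 * 1.2
    return useful_gain_w_m2 / (mass_flow_kg_s * 1005.0)

print(f"130 m³/h/m² ≈ {to_cfm_ft2(130):.1f} cfm/ft²")
print(f"temperature rise at 120 m³/h/m²: {outlet_temp_rise_k(350, 120):.1f} K")  # less air, warmer outlet
print(f"temperature rise at 170 m³/h/m²: {outlet_temp_rise_k(350, 170):.1f} K")  # more air, cooler outlet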
Figure 3: One of three apartment blocks of the Maple House in Toronto’s Canary District. Around 160 m² (1,722 ft²) of SolarWall collectors clad the two-storey mechanical penthouse on the roof. The rental flats have been occupied since the beginning of 2024. Collaborators: architects-Alliance, Claude Cormier et Associés, Thornton Tomasetti, RWDI, Cole Engineering, DesignAgency, MVShore, BA Group, EllisDon. Photo: Conserval Engineering
    Solar air heating systems support LEED-certified building designs
Solar air collectors are also well-suited for use in multi-unit residential buildings (MURBs). A prime example is the Canary District in Toronto (see Figure 3), where single-stage SolarWall collectors from Conserval Engineering have been installed on several MURBs to clad the mechanical penthouses. “These penthouses are an ideal location for our air heating collectors, as they contain the make-up air units that supply corridor ventilation throughout the building,” explained Victoria Hollick, Vice President of Conserval Engineering. “The walls are typically finished with metal façades, which can be seamlessly replaced with a SolarWall system – maintaining the architectural language without disruption.” To date, nine solar air heating systems have been commissioned in the Canary District, covering a total collector area of over 1,000 m² (10,764 ft²).
    “Our customers have many motivations to integrate SolarWall technology into their new construction or retrofit projects, either carbon reduction, ESG, or green building certification targets,” explained Hollick.
The use of solar air collectors in the Canary District was proposed by architects from the Danish firm Cobe. The black-colored SolarWall system preheats incoming air before it is distributed to the building’s corridors and common areas, reducing reliance on natural gas heating and supporting the pursuit of LEED Gold certification. Hollick estimates the gas savings at 10 to 20 per cent of the total heating load for the corridor ventilation of the multi-unit residential buildings. Additional energy-saving strategies include a 50/50 window-to-wall ratio with high-performance glazing, green roofs, high-efficiency mechanical systems, LED lighting, and Energy Star-certified appliances.
The ideal orientation for a SolarWall system is due south. However, the systems can be built at any orientation up to 90° east or west, explained Hollick. A SolarWall at 90° would have approximately 60 per cent of the energy production of the same collector area facing south.
Canada’s expertise in solar air heating continues to set a global benchmark, driven by supportive R&D, innovative technologies, strategic partnerships, and a growing portfolio of high-impact projects. With strong policy support and proven performance, solar air heating is poised to play a key role in the country’s energy-efficient building future.
Figure 4: The Claude-Béchard Building in Quebec is a showcase project for sustainable architecture, with a 72 m² (775 ft²) Lubi solar air heating wall from Aéronergie. It serves as a regional administrative center. Architectural firm: Goulet et Lebel Architectes. Photo: Art Massif

Bärbel Epp is the general manager of the German agency solrico, which focuses on solar market research and international communication.
    The post Op-ed: Canada’s leadership in solar air heating—Innovation and flagship projects appeared first on Canadian Architect.