• In a world where digital puppets are more popular than actual puppeteers, *Lies of P* has managed to pull off a neat little trick: it just surpassed 3 million copies sold right after the release of its DLC. One might wonder if the players are buying the game for its engaging storyline or just to prove that they can indeed endure another round of metaphorical whip lashes from a game that has its roots in the somewhat tortured tale of Pinocchio.

    Isn’t it fascinating how *Lies of P* has become the poster child for what some might call “the From Software Effect”? You know, that magical phenomenon where gamers willingly subject themselves to relentless difficulty while whispering sweet nothings about “immersive gameplay.” Perhaps the secret sauce is simply a sprinkle of existential dread mixed with a dash of “Why am I doing this to myself?”

    Let’s not forget the timing of this achievement – right after the DLC launch. Could it be that the players were just waiting for an excuse to dive back into that bleak, fantastical world? Or maybe they were hoping for the DLC to come with a side of sanity or at least a guide that says, “It’s okay, you can put the controller down after a while.” But no, why would anyone want a game that respects their time?

    Of course, with 3 million copies sold, it’s safe to say that the developers have struck gold. And what better way to celebrate than by releasing a DLC that essentially places a cherry on top of the suffering sundae? Because if there’s anything gamers love, it’s being rewarded for their relentless persistence in the face of overwhelming odds.

    And let’s take a moment to appreciate the irony here. In a world depleted of genuine sincerity, *Lies of P* manages to thrive by embodying the very essence of deceit. Is it a game about lying? Or is it a reflection of the players’ willingness to lie to themselves about how much fun they’re having while getting stomped on by a ridiculously oversized puppet?

    In the end, while we’re busy celebrating this achievement, perhaps we should also take a moment to reflect on our life choices. Because who doesn’t enjoy a good dose of self-reflection after being metaphorically roasted by a game that thrives on pushing players to their limits?

    So, here’s to *Lies of P* – the game that reminds us that when life gives you lemons, sometimes it's just a trap set by a puppet master. Cheers to the 3 million players who have chosen to embrace the lie!

    #LiesOfP #GamingNews #DLC #FromSoftware #GamingCommunity
    Right after the release of its DLC, Lies of P surpasses 3 million copies sold
    ActuGaming.net — Arguably one of the best alternatives to From Software’s games, Lies of P has […]
  • Publishing your first manga might sound exciting, but honestly, it’s just a lot of work. It’s one of those things that you think will be fun, but then you realize it’s just a long journey filled with endless sketches and revisions. Six top manga artists talk about their experiences, but let’s be real, it’s not all that thrilling.

    First off, you have to come up with a story. Sounds easy, right? But then you sit there staring at a blank page, and the ideas just don’t come. You read what other artists say about their success, and it makes you feel like you should have everything figured out. They talk about characters and plots like it’s the easiest thing in the world. But between you and me, it’s exhausting.

    Then comes the drawing part. Sure, you might enjoy sketching sometimes, but doing it for hours every day? That’s where the fun starts to fade. You’ll probably go through phases where you hate your own art. It’s a cycle of drawing, erasing, and feeling disappointed. It’s not a glamorous process; it’s just a grind.

    After you’ve finally got something that resembles a story and some pages that are somewhat decent, you have to think about publishing. This is where the anxiety kicks in. Do you self-publish? Try to find a publisher? Each option has its own set of problems. You read advice from those six artists, and they all sound like they’ve got it figured out. But honestly, who has the energy to deal with all those logistics?

    Marketing is another thing. They say you need to promote yourself, build a following, and all that jazz. But scrolling through social media to post about your manga feels more like a chore than a fun activity. You might think you’ll enjoy it, but it’s just more work piled on top of everything else.

    In the end, the best advice might be to just get through it and hope for the best. You’ll survive the experience, maybe even learn something, but it’s not going to be a walk in the park. If you’re looking for a carefree journey, publishing your first manga probably isn’t it.

    So, yeah. That’s the reality. It’s not as glamorous as it sounds. You just do it, and hope that someday it might feel rewarding. But until then, it’s just a lot of waiting and wondering. Good luck, I guess.

    #Manga #Publishing #MangaArtists #Comics #ArtProcess
    How to publish your first manga (and survive the experience)
    Six top manga artists reveal the secrets behind their success
  • Ankur Kothari Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns.
    But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic.
    This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die,” explores how businesses can translate raw data into actionable insights that drive real results.
    Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

     
    Ankur Kothari Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interactions, purchase history, and app usage patterns.
    Second would be demographic information: age, location, income, and other relevant personal characteristics.
    Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews.
    Fourth would be the customer journey data.

    We track touchpoints across the customer’s various channels to understand the journey path and conversion. Combining these four primary sources helps us understand the engagement data.

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is relevance to your business objectives: actionable data directly relates to your specific goals or KPIs. Then we take help from statistical significance.
    Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in.

    You also want to make sure that there is consistency across sources.
    Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory.
    Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy.

    By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions.
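The statistical-significance filter described above can be sketched with a simple two-proportion z-test — a minimal illustration with hypothetical numbers, not a tool from the interview:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: z-score for the difference in conversion
    rates between two samples, using a pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# The same sized lift is a real signal at volume but noise on a small
# sample (|z| > 1.96 corresponds to roughly 95% confidence).
print(two_proportion_z(500, 10_000, 580, 10_000))  # clears 1.96: signal
print(two_proportion_z(5, 100, 6, 100))            # far below 1.96: noise
```

In practice a stats library would handle this, but the point stands: the identical trend line is actionable in one context and a random fluctuation in the other.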

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs.

    By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points.

    Second would be identifying emerging needs.
    Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing my company to adapt their products or services accordingly.
    Third would be segmentation analysis.
    Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies.
    Last is to build competitive differentiation.

    Engagement data can highlight where our companies outperform competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions.
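The underserved-segment idea above can be made concrete with a toy percentile cross-filter — purely illustrative field names and thresholds, not the interviewee’s actual model:

```python
def find_underserved(customers, engagement_cut=0.7, revenue_cut=0.3):
    """Flag customers whose engagement percentile is high but whose
    revenue percentile is low — a candidate underserved segment."""
    def percentile_ranks(values):
        order = sorted(values)
        span = max(len(values) - 1, 1)
        return [order.index(v) / span for v in values]

    eng = percentile_ranks([c["engagement"] for c in customers])
    rev = percentile_ranks([c["revenue"] for c in customers])
    return [c["id"] for c, e, r in zip(customers, eng, rev)
            if e >= engagement_cut and r <= revenue_cut]
```

Customers who interact heavily but buy little are exactly the gap between current offerings and unmet needs that the answer describes.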

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization, one of the financial services companies, where being very data-driven made a major impact on a critical decision regarding our credit card offerings.
    We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms.
    That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to the millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product which met that specific segment’s needs.

    That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using the engagement data in real time, we do quite a few things. In the past two or three years, we have been using it for dynamic content personalization, adjusting the website content, email messaging, or ad creative based on real-time user behavior and preferences.
    We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments.
    Then we also build responsive social media engagement, monitoring social media sentiment and trending topics to quickly adapt the messaging and create timely and relevant content.

    Alongside one-on-one personalization, we do a lot of A/B testing as part of overall rapid testing of campaign elements like subject lines and CTAs, building on the most successful variants of the campaigns.
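The automated budget reallocation mentioned above can be sketched as a proportional rule with an exploration floor — a simplified stand-in for the AI-driven tools referenced, with invented channel names:

```python
def reallocate_budget(total, channel_roas, floor=0.05):
    """Shift spend toward top-performing channels while keeping a
    minimum exploration floor on every channel so signals keep flowing."""
    n = len(channel_roas)
    total_roas = sum(channel_roas.values())
    if total_roas == 0:
        return {ch: total / n for ch in channel_roas}  # no signal: equal split
    pool = total * (1 - floor * n)  # the portion shifted by performance
    return {ch: total * floor + pool * roas / total_roas
            for ch, roas in channel_roas.items()}

budgets = reallocate_budget(100, {"search": 4.0, "social": 2.0, "email": 2.0},
                            floor=0.1)
```

The floor matters: a channel starved to zero spend stops producing performance data, so the optimizer can never learn it recovered.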

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems, and we are tracking each customer’s behavior in real-time. So the moment they move to different channels, we know what the context is, what the relevance is, and the recent interaction points, so we can cater the right offer.
    So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer.
    That gives our customer or potential customer more one-to-one personalization instead of just segment-based or bulk interaction kind of experience.

    We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers.
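The cross-channel context handoff described here — a branch agent seeing yesterday’s web activity — can be illustrated with a toy in-memory profile store. This is a sketch of the pattern, not the actual CDP in use; all names are hypothetical:

```python
from collections import defaultdict

class EngagementStore:
    """Toy stand-in for a CDP profile store: record events per customer
    across channels and surface the most recent context on demand."""
    def __init__(self):
        self.events = defaultdict(list)  # customer_id -> [(ts, channel, detail)]

    def record(self, customer_id, ts, channel, detail):
        self.events[customer_id].append((ts, channel, detail))

    def latest_context(self, customer_id):
        history = self.events.get(customer_id)
        if not history:
            return None
        return max(history, key=lambda e: e[0])  # most recent event wins

# The agent pulls the freshest context before an in-person interaction:
store = EngagementStore()
store.record("cust-42", 1, "web", "viewed cashback offer")
store.record("cust-42", 2, "email", "clicked cashback offer")
```

A real CDP adds identity resolution, streaming ingestion, and retention policies on top, but the lookup shape is the same.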

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development — and by products I mean not just the financial products, or whatever an organization sells, but also the mobile apps and websites customers use for transactions. That kind of product development gets improved.
    The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments.

    Customer service also benefits, by anticipating common issues, personalizing support interactions over phone, email, or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention.

    So in general, the cross-functional application of engagement data improves the customer-centric approach throughout the organization.

    8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    I think the biggest is the huge amount of data we are dealing with. As customers become more digitally savvy and move to digital channels, we are getting a lot of data, and that sheer volume can be overwhelming, making it very difficult to identify truly meaningful patterns and insights.

    Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement.

    Because of data silos and overload of data, data quality issues appear. There is inconsistency, and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues could also be due to the wrong format of the data, or the data is stale and no longer relevant.
    As we are growing and adding more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, often lack the skills to properly interpret the data or apply data insights effectively.
    So there’s a lack of understanding of marketing and sales as domains.
    It’s a huge effort and can take a lot of investment.

    Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. If you cannot stop that from step one—not bringing noise into the data system—that cannot be done by just technical folks or people who do not have business knowledge.
    Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side.

    Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do.
    And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations.
    The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it.

    Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI.
    We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals.

    We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points.
    Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us.
    We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels.
    Another thing is social media listening tools, wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms.

    Last is web analytics tools, which provide detailed insights into your website visitors’ behavior and engagement metrics across browsers, devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies.
    While we collect data from different sources, we clean the data so it becomes cleaner with every stage of processing.
    We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats.

    On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.
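The cleansing steps listed above — normalization, dropping unusable rows, deduplication by recency — can be sketched in a few lines. This is an illustrative fragment with invented field names, not the actual tooling described:

```python
def cleanse(records):
    """Normalize and deduplicate customer records: lowercase/trim the
    email key, drop rows without one, keep the freshest row per email."""
    latest = {}
    for r in records:
        email = (r.get("email") or "").strip().lower()
        if not email:
            continue  # unusable without the join key
        if email not in latest or r["updated_at"] > latest[email]["updated_at"]:
            latest[email] = {**r, "email": email}
    return list(latest.values())
```

Keeping the most recent record per key is one common policy; production cleansing tools also handle fuzzy matching, format validation, and audit trails.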

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The first thing that’s been the biggest trend from the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real-time to inform strategic choices.
    Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities.
    We also touched upon hyper-personalization. We are all trying to strive toward more hyper-personalization at scale, which is more one-on-one personalization, as we are increasingly capturing more engagement data and have bigger systems and infrastructure to support processing those large volumes of data so we can achieve those hyper-personalization use cases.
    As the world is collecting more data, privacy concerns and regulations come into play.
    I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies.
    And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture.

    So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

     
    This interview Q&A was hosted with Ankur Kothari, a previous Martech Executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage.
    #ankur #kothari #qanda #customer #engagement
Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us. We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels. Another thing is social media listening tools, wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms. Last is web analytical tools, which provide detailed insights into your website visitors’ behaviors and engagement metrics, for browser apps, small browser apps, various devices, and mobile apps. 13. How do you ensure data quality and consistency across multiple channels to make these informed decisions? We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies. While we collect data from different sources, we clean the data so it becomes cleaner with every stage of processing. We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats. On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically. 14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years? 
The first thing that’s been the biggest trend from the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real-time to inform strategic choices. Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities. We also touched upon hyper-personalization. We are all trying to strive toward more hyper-personalization at scale, which is more one-on-one personalization, as we are increasingly capturing more engagement data and have bigger systems and infrastructure to support processing those large volumes of data so we can achieve those hyper-personalization use cases. As the world is collecting more data, privacy concerns and regulations come into play. I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies. And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture. So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.   This interview Q&A was hosted with Ankur Kothari, a previous Martech Executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage. #ankur #kothari #qampampa #customer #engagement
    WWW.MOENGAGE.COM
    Ankur Kothari Q&A: Customer Engagement Book Interview
Reading Time: 9 minutes

In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns. But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question (and many others), we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic. This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die,” explores how businesses can translate raw data into actionable insights that drive real results. Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

Ankur Kothari Q&A Interview

1. What types of customer engagement data are most valuable for making strategic business decisions?

Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interactions, purchase history, and app usage patterns. Second would be demographic information: age, location, income, and other relevant personal characteristics. Third would be sentiment analysis, where we derive information from social media interactions, customer feedback, and customer reviews. Fourth would be customer journey data: we track touchpoints across various channels to understand the customer’s journey path and conversion. Combining these four primary sources helps us understand the engagement data.

2. How do you distinguish between data that is actionable versus data that is just noise?

First is keeping relevant to your business objectives: actionable data directly relates to your specific goals or KPIs. Then we take help from statistical significance.
Actionable data shows clear patterns or trends that are statistically valid, whereas noise consists of random fluctuations or outliers that may not be what you are interested in. You also want to make sure there is consistency across sources: actionable insights are typically corroborated by multiple data points or channels, while noise tends to be isolated and contradictory. Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy. By applying these criteria, I can effectively filter out the noise and focus on data that drives valuable business decisions.

3. How can customer engagement data be used to identify and prioritize new business opportunities?

First, it helps us uncover unmet needs. By analyzing customer feedback, touchpoints, support interactions, and usage patterns, we can identify gaps in our current offerings or areas where customers are experiencing pain points. Second would be identifying emerging needs: monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing the company to adapt its products or services accordingly. Third would be segmentation analysis. Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into new areas and geographies. Last is building competitive differentiation: engagement data can highlight where our company outperforms competitors, helping us prioritize opportunities that leverage existing strengths and unique selling propositions.

4. Can you share an example of where data insights directly influenced a critical decision?
I will share an example from my previous organization, a financial services company, where we were very data-driven, and it made a major impact on a critical decision regarding our credit card offerings. We analyzed the customer engagement data and discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms. That insight led us to develop and launch our first digital credit card product, with enhanced mobile features and rewards tailored to millennial spending habits. Since we also had access to a lot of transactional data, we were able to build a financial product that met that specific segment’s needs. That data-driven decision resulted in a 40% increase in new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was crucial.

5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?

When it comes to using engagement data in real time, we do quite a few things. Over the past two or three years, we have been using it for dynamic content personalization: adjusting website content, email messaging, or ad creative based on real-time user behavior and preferences. We automate campaign optimization using AI-driven tools that continuously analyze performance metrics and automatically reallocate budget to top-performing channels or ad segments. We also build responsive social media engagement, monitoring social media sentiment and trending topics to quickly adapt the messaging and create timely, relevant content. And alongside one-on-one personalization, we do a lot of A/B testing as part of overall rapid testing of market elements like subject lines and CTAs, building on the successful variants of the campaigns.

6. How are you doing the 1:1 personalization?

We have advanced CDP systems, and we are tracking each customer’s behavior in real time. The moment they move to different channels, we know the context, the relevance, and the recent interaction points, so we can serve the right offer. For example, if you looked at a certain offer on the website after arriving from Google, and the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer. That gives our customer or potential customer a one-to-one personalized experience instead of a segment-based or bulk interaction. We have a huge team of data scientists, data analysts, and AI model creators who help us analyze big volumes of data and bring the right insights to our marketing and sales teams so that they can provide the right experience to our customers.

7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?

Primarily with product development. We have different products, not just the financial products an organization sells but also products like the mobile apps and websites customers use for transactions, and that kind of product development gets improved. The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments. Customer service also benefits, by anticipating common issues, personalizing support interactions over phone, email, or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention. In general, cross-functional application of engagement data improves the customer-centric approach throughout the organization.

8. What do you think are some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?

I think the sheer amount of data we are dealing with. As customers become more digitally savvy and move to digital channels, we are getting a lot of data, and that volume can be overwhelming, making it very difficult to identify truly meaningful patterns and insights. Because of the data overload, we create data silos in the process: information often exists in separate systems across different departments, and we are not able to build a holistic view of customer engagement. Because of data silos and data overload, data quality issues appear. Inconsistent and inaccurate data can lead to incorrect insights and poor decision-making. Quality issues can also come from data in the wrong format, or data that is stale and no longer relevant. As we grow and add more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, can lack the skills to properly interpret the data or apply data insights effectively; there is a lack of understanding of marketing and sales as domains. It is a huge effort and can take a lot of investment, and not being able to calculate the ROI of that overall investment is a big challenge many organizations face.

9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?

If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. Stopping that at step one, not bringing noise into the data system, cannot be done by technical folks alone or by people who lack business knowledge. Business people do not know everything about what data is being collected from which source and what data they need.
It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have much exposure to that side. Similarly, marketing and business people do not have much exposure to the technical side: what is possible to do with data, how much effort it takes, what is relevant versus not, and how to prioritize which data sources will be most important.

10. Do you have any suggestions for how this can be overcome, or have you seen it solved in action before?

First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do, and giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations. Second, helping teams work more collaboratively. It should not be that the technology team works in a silo, comes back when its work is done, and then the marketing and sales teams act on it. We’re making it more like one team: you work together so that you can complement each other, and you have a better strategy from day one.

11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?

We present clear business cases demonstrating how data-driven recommendations directly align with business objectives and potential ROI. We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and their implications for business goals. We also run a lot of POCs and pilot projects, small-scale implementations that showcase tangible results and build confidence in the data-driven approach throughout the organization.

12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touchpoints. Advanced analytics platforms, tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights, are of great value to us. We, like many organizations, use marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels. Another is social media listening tools, for tracking wherever your brand is mentioned, measuring customer sentiment, and following the engagement of your campaigns across social platforms. Last is web analytics tools, which provide detailed insights into your website visitors’ behavior and engagement metrics across browsers, devices, and mobile apps.

13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?

We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. We use data integration platforms, tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies. As we collect data from different sources, we clean it so it becomes cleaner with every stage of processing. We also conduct regular data audits, periodic checks to identify and rectify data quality issues and ensure the accuracy and reliability of information. We deploy standardized data formats, and on top of that we run automated data cleansing tools, software that detects and corrects errors, redundancies, duplicates, and inconsistencies in data sets automatically.

14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
The biggest trend of the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real time to inform strategic choices. Closely related is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with greater accuracy. We also touched on hyper-personalization: we are all striving toward hyper-personalization at scale, true one-on-one personalization, as we capture more engagement data and build the systems and infrastructure to process those volumes and support those use cases. As the world collects more data, privacy concerns and regulations come into play. I believe the next few years will bring more innovation in how businesses can collect data ethically and in usage practices, leading to more transparent, consent-based engagement data strategies. And lastly, integration of engagement data is always a big challenge. Even as we solve today’s integration challenges, we keep adding more complex data sources to the picture, so more innovation and sophistication will need to be brought to data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

This interview Q&A was hosted with Ankur Kothari, a former Martech executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage.
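As an editorial aside: question 2 above distinguishes actionable data from noise by statistical significance. Purely as an illustration of that filter (this sketch is not from the interview; the function names, thresholds, and numbers are invented for the example), a two-proportion z-test can flag an engagement lift as "actionable" only when it is unlikely to be random fluctuation:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def is_actionable(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Treat an observed lift as signal only when it clears the
    significance bar; otherwise regard it as noise."""
    _, p = two_proportion_z(conv_a, n_a, conv_b, n_b)
    return p < alpha

# A 5.0% -> 6.0% lift on 10,000 users per arm clears the bar;
# a 5.0% -> 5.1% wobble on the same volume does not.
print(is_actionable(500, 10_000, 600, 10_000))   # True
print(is_actionable(500, 10_000, 510, 10_000))   # False
```

A real pipeline would also apply the interview's consistency-across-sources criterion, for example requiring the same lift to show up in more than one channel before acting on it.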
  • This Week's Tips For Helldivers 2, Monster Hunter Wilds, Oblivion Remastered, And More

You know what we all need sometimes? A little advice. How do I plan for a future that’s so uncertain? Will AI take my job? If I go back to school and use AI to cheat, will I graduate and work for an AI boss? We can’t help you with any of that. But what we can do is provide some tips for Helldivers 2, Monster Hunter Wilds, Oblivion Remastered, and other great games. So, read on for that stuff, and maybe ask ChatGPT about those other things.

Don’t Rely On Ex Pokémon In Pokémon TCG Pocket Anymore
Image: The Pokémon Company
During the initial months of Pokémon TCG Pocket, ex monsters dominated the competitive landscape. These monsters are stronger than their non-ex counterparts, and they can come with game-changing abilities that determine how your entire deck plays. In the past, players could create frustratingly fearsome decks consisting of two ex Pokémon supported by trainer and item cards. However, unless you pair together very specific ex Pokémon, you’ll now find yourself losing nearly every game you play. - Timothy Monbleau

Please, For The Love Of God, Defeat All Illuminate Stingrays In Helldivers 2
Image: Arrowhead Game Studios
You know what? Screw the Illuminate. I played round after round trying to get the Stingrays, also known as Interlopers, to spawn at least once, and those damn Overseers and Harvesters kept walking up and rocking me. In the end, I was victorious. A Stingray approached the airspace with reckless abandon, swooping in with practiced ease as it unloaded a barrage of molten death beams upon my head, and you know what happened? I died. A few times. But eventually, I managed to pop a shot off, and I quickly discovered how to defeat Illuminate Stingrays in Helldivers 2. - Brandon Morgan

Defeating Monster Hunter Wilds’ Demi Elder Dragon Might Be The Game’s Hardest Challenge So Far
Screenshot: Capcom / Samuel Moreno / Kotaku
Although Zoh Shia is the thematic boss of Monster Hunter Wilds, other beasts can put up a tougher fight. Gore Magala are easily in contention for being the most deadly enemies in the game. Not much is more threatening than their high mobility, powerful attacks, and unique Frenzy ailment that forms the basis for your Corrupted Mantle. - Samuel Moreno

Don’t Forget To Play ‘The Shivering Isles’ Expansion In Oblivion Remastered
Screenshot: Bethesda / Brandon Morgan / Kotaku
Whether you’ve played the original Oblivion or not, chances are you’ve heard tales of the oddities awaiting you in the Shivering Isles. This expansion, the largest one for the open-world RPG, features a land of madness under the unyielding control of Sheogorath. It’s a beautiful world, yet so immensely wrong. But that’s why this DLC is one of the best in the franchise, so no matter how many hours you may have already put into the main story and the main world, you don’t want to miss this expansion. - Brandon Morgan

How Long Of A Ride Is Mario Kart World?
Screenshot: Nintendo
The Mario Kart franchise has been entertaining us all for decades, even with sibling fights and fits of rage over losing a race to a blue shell at the last second, but Mario Kart World is the first game to go open world. There hasn’t been a truly new entry in the series since 2014’s Mario Kart 8, so being stoked to dive into this exciting adventure is perfectly reasonable. Equally reasonable, especially given the game’s controversial price tag, is to wonder how long it’ll take to beat and what type of replayability it offers. Let’s talk about it. - Billy Givens

Mario Kart World Players Are Exploiting Free Roam To Quickly Farm Coins
Gif: Nintendo / FannaWuck / Kotaku
Mario Kart World is full of cool stunts and lots of things to unlock, like new characters, costumes, and vehicles. The last of those requires accumulating a certain number of coins during your time with the Switch 2 exclusive, and while you could do that the normal way by just playing tons of races, you can also use the latest entry’s open world to farm coins faster, or even while being completely AFK. - Ethan Gach

Oblivion Remastered’s Best Side Quest Is A World Within A World
Screenshot: Bethesda / Brandon Morgan / Kotaku
It’s been a long time since I kept a spreadsheet for a video game, or even notes beyond what I need for work. I had one for my original Oblivion run back in my school days. Back then, I knew where to find every side quest in the game. There were over 250. Still are, but now they’re enhanced, beautified for the modern gamer. One side quest retains its crown as the best, despite the game’s age. “A Brush With Death” is Oblivion Remastered’s best side quest by far, and here’s how to find and beat it! - Brandon Morgan

Diablo IV: How To Power Level Your Way To Season 8’s Endgame
Image: Blizzard
Whether you’re running a new build, trying out a new class, or returning to Diablo IV after an extended break, learning how to level up fast in Diablo IV should help you check out everything new this season, along with hitting endgame so that your friends don’t cruelly make fun of you! - Brandon Morgan

The 5 Strongest Non-Ex Pokémon To Use In Pokémon TCG Pocket
Image: The Pokémon Company
It’s official: ex Pokémon no longer rule Pokémon TCG Pocket unchallenged. While these powerful cards are still prevalent in the competitive landscape, the rise of ex-specific counters has made many of these monsters risky to bring. It’s never been more vital to find strong Pokémon that are unburdened by the ex label, but who should you use? - Timothy Monbleau

Some Of The Coolest Monster Hunter Wilds Armor Can Be Yours If You Collect Enough Coins
Screenshot: Capcom / Samuel Moreno / Kotaku
It goes without saying that Monster Hunter Wilds has a lot of equipment materials to keep track of. The Title 1 Update increased the amount with the likes of Mizutsune parts and the somewhat obscurely named Pinnacle Coins. While it’s easy to know what the monster parts can be used for, the same can’t be said for a coin. Making things more complicated is that the related equipment isn’t unlocked all at once. - Samuel Moreno
    KOTAKU.COM
    This Week's Tips For Helldivers 2, Monster Hunter Wilds, Oblivion Remastered, And More
  • NOOBS ARE COMING (Demo) [Free] [Action] [Windows] [Linux]

SirCozyCrow (5 hours ago): The soundtrack is PEAK! I loved playing this, and my partner, who normally doesn't play games like this one, had a good time as well. I enjoyed the learning curve and I can't wait to play the harder difficulties. Here's a video I made; my partner jumped in for a few minutes as well.

(reply): so fun

Drew.a.Chain (1 day ago): Very addictive!

Trashpanda119 (1 day ago): Love the playstyle and the art style. Definitely fun to play, plus the music is the cherry on top.

AhoOppai (1 day ago): Really fun game, can't wait for the full game.

Din Xavier coding (1 day ago): I chose the laser eye. How do I turn the attack around? Can I even do that?

overboy (1 day ago): Hey, the laser eye gets a random direction at the start of each wave; it's one of the specificities of this attack ;)

Fort Kenmei (1 day ago): Gameplay and Critique ;)

overboy (1 day ago): Thanks a lot for the awesome video and the feedback! :)

TLGaby (2 days ago): Just so you know, browser progress keeps getting reset.

overboy (1 day ago): Thanks for the report! Could it be due to some of your browser settings? Unfortunately, browser-based games can't always guarantee reliable local saves due to how browsers handle storage. To avoid this in the future, I recommend trying the downloadable version of the demo; it provides a more stable environment for saving progress. :)

(reply): epic.

oleekconder (2 days ago): Very nice. Spent a couple hours easy =) UPD: And some more.

MaximusR (3 days ago): It's a game I already played back when it had fewer things in it, and now that it's updated I'd like to record it again.

(reply): EPIC. Love the spiders ♥

nineGardens (3 days ago): Okay so... tried out a few things, and some dev suggestions to report:
- Bigfoot is such a cool idea, and running around at that speed with, like, all THAT going on just gave me motion sickness.
- Summoner is hysterical fun. All hail spiders. Tomatoes are pretty fun too.
- The Adept is so cool in theory, but once you have the right build it's a bit of a "standing still simulator." Also, if you have totems or other turrets, there's very much the question each round of "Will my circle spawn NEAR the totems, or far from them?" I kind of wonder if the mage circle should fizzle out after 20 seconds and appear somewhere else. Just... something to give a bit more dynamism, and to make the original spawn point less critical.
Added thoughts:
- Watering psychotic tomatoes feels great.
- Being a malevolent spider with 8 arms feels amazing. Feels very good and natural.
- "Orbital" is one of the greatest and most fun abilities in the game. I would take this even without the damage boost.
Lots of fun, but also very silly. Good job.

dave9999 (3 days ago): With some Size you can kick the totems around to reposition them towards your circle, and it benefits them too. The Adept can choose the wand at the start, and with it you have no sustain problem anyway, whatever build you want to set up.

nineGardens (3 days ago): Oh damn, only just found out you can kick the totems! Okay, yeah, in this case all is well. Or at least... I still think a moving circle could be cool, but the fact that you can move your totems over to where the circle is makes things much better.

(reply): Just get enough Amount + Size and they hit everything; Bounce is overkill.

(reply): Lost track of time, 10 hours in and still hooked. Absolutely love it! Can't wait for the full release.

DriftedVoid (4 days ago): Pretty good!
Indyot (4 days ago): It's a pretty addictive game, congrats! I lowkey missed a bit of satisfaction on the weapons, though.

(reply): Congrats on the game! I really like the weapons that you interact with, which gives it a fun spin.

1Soultaken (4 days ago): Anyone know good combos for the items?

dave9999 (4 days ago): Lasers plus Amount + Adept, with some Arcane for basic damage. Totems + Amount + Bounce + Adept, optional Size and Arcane; you can stand still in the end. All shovels with Crit and Strength; their extra souls help you snowball hard and easy, and it's probably the most straightforward and stable build. You can beat the game with nearly anything (it's well balanced), but this one is very strong and easy. Soul Flask and "more chests" are near-always must-picks; the high-Luck ones give you better items. The free reroll is a must-pick. Lightning Dagger is somewhat unique, as it can carry you the entire early game even if you don't get enough element damage.

dave9999 (8 days ago): Underestimated totems.

limey (8 days ago): I like how you've made, like, MULTITUDES of updates on this, so as soon as I check my feed it's just this.

dave9999 (8 days ago): My best run so far. Is there a hidden mechanic that makes weapons you already have more likely to drop?

overboy (8 days ago): Lmao, awesome — looks like a really fun build to play! Yeah, the Shop RNG uses a lot of hidden tricks to help you find relevant attacks, while still allowing unrelated ones to appear. That way, you can discover unique builds and experiment freely!

overboy (8 days ago): Thank you so much for the incredible reception of the web demo on Itch, and to everyone who wishlisted the game! Many of the changes—along with much more to come in future updates—are directly based on your feedback here and on the game’s Discord.

    I’m also excited to announce that the game will release on Steam on 8 July 2025!
Demo - Update 35
- Singleplayer UI: the Level Up Upgrade Phase and Chest Pickup Phase UI now display the items and attacks inventories
- Singleplayer Shop: subtle animation while selecting a Buy button
- Many balancing tweaks
- Balancing: nerfed Life Steal in various ways
- Balancing: nerfed Knockback in various ways
- Balancing: too many items enhancing Max HP were in the Demo, meaning it was easier to get a lot of HP and survive due to the higher ratio of items providing HP
- Added a short grace period during which the player can still pick up Souls even after they're slurped by the Soul Portal
- Fine-tuned the color of some weapons to improve visibility
- Balancing: Ballistas no longer double their projectiles based on Amount
- If the player's HP is full and Max HP > 20, the player can't be one-shot
- Bugfix: the in-game achievement pop-up could be displayed below other UI elements when it should always be above everything else
- Potential bugfix for a rare Multiplayer shop bug where player 2's shop sections weren't displayed at all
- Reworked the save system in preparation for upcoming features
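One changelog line above — "If the player's HP is full and Max HP > 20, the player can't be one-shot" — amounts to a simple damage clamp. A minimal sketch of how such a rule might work (purely illustrative; the function name and 1-HP survival value are assumptions, not the game's actual code):

```python
def apply_damage(hp: int, hp_max: int, damage: int) -> int:
    """Hypothetical sketch of the one-shot protection rule: if HP is
    full and Max HP > 20, a single hit can never kill outright;
    here we assume it leaves the player at 1 HP instead."""
    new_hp = hp - damage
    if hp == hp_max and hp_max > 20 and new_hp <= 0:
        return 1  # survive the would-be one-shot
    return max(new_hp, 0)
```

Note the `hp_max > 20` guard: early on, when Max HP is tiny, the protection doesn't apply, so big hits still matter.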
xHELLO_WORLDx (10 days ago): Congrats on the game.

dave9999 (10 days ago):

elijah_ap (10 days ago): Love the art style, upgrades, controls, etc. Balance might be the only thing off about this. If you were to add anything, I would want to see more variety in the stages, similar to Vampire Survivors. Otherwise, really great.

(reply): Thank you so much! I’ll keep working on the balance with each update, and I appreciate the suggestion on stage variety!

Netsmile (10 days ago): Torch IV has a problem rounding numbers in the stats hover-over display. Other levels of torches work.

overboy (10 days ago): Thanks, I'll fix this displayed rounding number issue soon!

Skeppartorsk (10 days ago): For now I'd say it's fun, but lacking a bit in balance. I absolutely suck at brotato-likes but find this one easy, so it's probably undertuned as far as difficulty is concerned. The power and availability of HP and regen items makes you just literally not care if you get hit. Add the relatively strong armor on top, and you're just too tanky for anything to feasibly ever kill you.

overboy (10 days ago): Thanks for the feedback! Sounds like tanky builds might be a bit too forgiving right now; I'll do some balancing changes.

Skeppartorsk (9 days ago): Life Steal has similar issues too. There's also the standard issue with knockback in these kinds of games: the lack of any enemy resistance/diminishing returns means it's way too easy to get enough knockback that enemies cannot touch you anymore. Ranged attacks are too few and far between to worry about with the current levels of sustain, meaning you can just Stand Still and Kill way too reliably. Edit: Lategame with 6x Wands I'm getting so much screen shake it's triggering simulation sickness. It was due to having Pierce + Bounce: the screen shake from my projectiles bouncing off the edge of the map.

overboy (8 days ago): Thanks for your feedback, it will help with the game balancing! For now I try to avoid diminishing returns by design, to make sure each feature and stat is super easy to understand, because I dislike it when a roguelike gets too opaque; I prefer that the player fully and easily understands each of their choices. But yeah, that means there's a good balance to find! In future updates, Life Steal will become harder to get, and Knockback will be capped at lower maximum applied values. Regarding the overall difficulty, the full version has 3 extra difficulty levels, and based on some feedback I have from beta testers, the balance between the 5 difficulty modes seems to be close to what I'm aiming for. There is already an option to disable screen shake ;) Edit: Would you be interested in joining the beta test of the full game? If so, please join the Discord and ping me in DM ;)

Skeppartorsk (8 days ago): I did notice that you could turn off screen shake entirely, but admittedly a lot of the visceral feel of the combat goes away when you fully disable it. It's just that when you have too many Leeroy/knockback projectiles/bouncing projectiles, it reaches the point where simulation sickness sets in. I wish there was something like an intensity setting, or a way to cap how often a screen shake can get triggered. I agree on the opaque thing, but I was thinking of something more akin to how CC diminishing returns works in WoW: 1st hit = full value, 2nd hit within 10s = half value, 3rd hit = 1/4 value, then 10s of immunity before it resets. That way you still get knockback when you pick knockback, but you can't just perma-nail enemies against the wall.
Edit: Also, there's a wording issue with how multiple pentagrams work. If you have the Adept pentagram and the item pentagram, the wording is "when you stand inside a pentagram," but the item one gives ONLY the 20% damage and the Adept one gives ONLY the Adept bonuses. The wording would mean that both pentagrams should give the Adept bonus AND the 20% damage bonus. Edit 2: I'd suggest reformatting the Grimorius tooltip so that the -10% armor is above the "on level up" portion. The indentation difference between the +1% speed and the -10% armor is small enough that I read it as losing 10% armor on every level up.

overboy (8 days ago): Thanks a lot for the interesting insights! I nerfed HP, Life Steal and Knockback using various techniques in the last update, along with many other changes. I just tested Pentagram/Adept and it works as expected: the two effects stack correctly, as the wording implied. And I reformatted the Grimorius tooltip as you suggested ;)

Bad Piggy (11 days ago): Very cool in its current state. I love how much it really emphasises movement, like how some active abilities need to be grabbed from around the arena to use them. That said, I think enemy projectiles could honestly stand out more; I could hardly see them at times in all the chaos. Still, I think this is a pretty solid base right now, and as always, you have a beautiful visual style, though I feel like the game suffers a little from how busy it can get. Great stuff so far, though.

(reply): Thanks Bad Piggy! Really glad you’re enjoying the mechanics. I appreciate the feedback on projectile visibility and how busy things can get. I’ll definitely look into ways to improve those aspects. Really grateful for the kind words and thoughtful feedback!

LeoLohandro (11 days ago): A copy of Brotato, but still fun.

overboy (11 days ago): Hey, thanks a lot! Yes, this game is a Brotato-like with many twists and new innovative mechanics, such as:
- Equippable Boss Patterns
- Minion Summoning
- Growing Plant Minions with a watering can
- Amount and Size stats
- Physics-Based Weapons, like chained spikeballs
- Kickable stuff
- A playable character merge feature
- Dozens and dozens of unique effects
I'm aiming for something like The Binding of Isaac meets Brotato — a deep, replayable experience full of chaotic synergies and wild builds that feel totally unique each run, with all the "being a boss" fantasy and humor deeply included in the mechanics and content :)
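Skeppartorsk's WoW comparison in the thread above describes a concrete diminishing-returns scheme: full value on the first hit, half on the second, a quarter on the third, then immunity until 10 seconds pass without a hit. A minimal sketch of that idea, purely illustrative (this is not code from either game, and the class and method names are invented):

```python
class DiminishingReturns:
    """WoW-style CC diminishing returns, as described in the comment:
    1st hit = full value, 2nd hit within the window = half,
    3rd = a quarter, then immunity until the window lapses."""

    WINDOW = 10.0  # seconds without a hit before the chain resets

    def __init__(self):
        self.hits = 0
        self.last_hit = float("-inf")

    def apply(self, base_value, now):
        # Reset the chain if the last hit is older than the window.
        if now - self.last_hit > self.WINDOW:
            self.hits = 0
        self.last_hit = now
        self.hits += 1
        if self.hits >= 4:
            return 0.0  # immune: the effect no longer applies
        # 1st hit -> 1.0x, 2nd -> 0.5x, 3rd -> 0.25x
        return base_value / (2 ** (self.hits - 1))
```

A scheme like this still rewards stacking knockback on the first hit, but prevents the "perma-nail enemies against the wall" loop the comment complains about, which is the trade-off being debated in the thread.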
    OVERBOY.ITCH.IO
    NOOBS ARE COMING (Demo) [Free] [Action] [Windows] [Linux]
SirCozyCrow (5 hours ago):
The soundtrack is PEAK! I loved playing this, and my partner, who normally doesn't play games like this one, had a good time as well. I enjoyed the learning curve and I can't wait to play the harder difficulties. Here's a video I made; my partner jumped in for a few minutes as well.

[reply] so fun

Drew.a.Chain (1 day ago, +1):
Very addictive!

Trashpanda119 (1 day ago, +1):
Love the playstyle and the art style, definitely fun to play, plus the music is the cherry on top.

AhoOppai (1 day ago, +1):
Really fun game, can't wait for the full game.

Din Xavier coding (1 day ago):
I chose the laser eye. How do I turn the attack around? Can I even do that?

overboy (1 day ago):
Hey, the laser eye gets a random direction at the start of each wave, it's one of the specificities of this attack ;)

Fort Kenmei (1 day ago):
Gameplay and Critique ;)

overboy (1 day ago, +1):
Thanks a lot for the awesome video and the feedback! :)

TLGaby (2 days ago):
Just so you know, browser progress keeps getting reset.

overboy (1 day ago, 2 edits, +1):
Thanks for the report! Could it be due to some of your browser settings? Unfortunately, browser-based games can't always guarantee reliable local saves due to how browsers handle storage. To avoid this in the future, I recommend trying the downloadable version of the demo; it provides a more stable environment for saving progress. :)

[reply] epic.

oleekconder (2 days ago, 1 edit, +1):
Very nice. Spent a couple hours easy =) UPD: And some more.

MaximusR (3 days ago):
It's a game I already played back when it had fewer things, and now that it's updated I'd like to record it again.

[reply] EPIC

[reply] love the spiders ♥

nineGardens (3 days ago, 1 edit, +2):
Okay so... tried out a few things, and some dev suggestions to report: Bigfoot is such a cool idea, and running around at that speed with, like... all THAT going on just gave me motion sickness. Summoner is hysterical fun. All hail spiders. Tomatoes are pretty fun too. The Adept is so cool in theory, but...
once you have the right build, it's a bit of a "standing still simulator." Also, if you have totems or other turrets, there's very much the question each round of "Will my circle spawn NEAR the totems (instant win), or far from them (oh no)?" I kind of wonder if the mage circle should, like... fizzle out after 20 seconds and appear somewhere else. Just... something to give a bit more dynamism, and to make the original spawn point less critical. Okay, added thoughts: Watering psychotic tomatoes feels great. Being a malevolent spider with 8 arms feels amazing, very good and natural. "Orbital" is one of the greatest and most fun abilities in the game; I would take this even without the damage boost. Lots of fun, but also very silly. Good job.

dave9999 (3 days ago):
With some size you can kick the totems around to reposition them towards your circle; it benefits them too. Adept can choose the wand at the start, and with it you have no sustain problem anyway, whatever build you want to set up.

nineGardens (3 days ago):
Oh damn, only just found out you can kick the totems! Okay, yeah, in this case all is well. Or at least... I still think a moving circle could be cool, but the fact that you can move your totems over to where the circle is makes things much better.

[reply] Just get enough amount + size and they hit everything; bounce is overkill.

[reply] Lost track of time, 10 hours in and still hooked. Absolutely love it! Can't wait for the full release.

DriftedVoid (4 days ago):
Pretty good!

Indyot (4 days ago):
It's a pretty addictive game, congrats! I lowkey missed a bit of satisfaction on the weapons though.

[reply] Congrats on the game! I really like the weapons that you interact with, which gives it a fun spin (i.e. the spike ball).

1Soultaken (4 days ago):
Anyone know good combos for the items?
(I just pick randomly.)

dave9999 (4 days ago, 1 edit, +2):
• Lasers plus amount + adept, some arcane for basic damage (it's unstable to set up, and only Overboy starts with one).
• Totems + amount + bounce + adept, optional size and arcane; you can stand still in the end.
• All shovels with crit and strength: their extra souls help you snowball hard and easy, probably the most straightforward and stable very good build. You can beat the game with nearly anything, it's well balanced, but this one is very strong and easy (realized in the end that all size was wasted on this).
• Soul flask and "more chests" are near-always must-picks; the high luck value ones give you better items. The free reroll is a must-pick. Lightning dagger is somewhat unique as it can carry you the entire early game even if you do not get enough element damage. (I understand that the more gimmicky things like pets and kickables give the game versatility, but to min-max they are not that competitive.)

dave9999 (8 days ago):
Underestimated totems.

limey (8 days ago):
I like how you made like MULTITUDES of updates on this, so as soon as I check my feed it's just this.

dave9999 (8 days ago, 1 edit, +1):
My best run so far. Is there a hidden mechanic that makes weapons you have more likely to drop?

overboy (8 days ago, +2):
Lmao, awesome, looks like a really fun build to play! Yeah, Shop RNG uses a lot of hidden tricks to help you find relevant attacks, while still allowing unrelated ones to appear. That way, you can discover unique builds and experiment freely!

overboy (8 days ago, 1 edit):
Thank you so much for the incredible reception of the web demo on Itch, and to everyone who wishlisted the game! Many of the changes, along with much more to come in future updates, are directly based on your feedback here and on the game's Discord. I'm also excited to announce that the game will release on Steam on 8 July 2025!
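overboy mentions above that the shop RNG uses hidden tricks to surface attacks relevant to the player's build while still letting unrelated ones appear. One common way to do that is a weighted draw without replacement; the sketch below is a hypothetical illustration of the idea (the function name, the item pool, and the `owned_bias` value are all assumptions, not the game's actual code):

```python
import random

def pick_shop_offers(pool, owned, n=4, owned_bias=3.0):
    """Pick n shop offers, nudging the roll toward items the player owns.

    pool: list of item names available in the shop
    owned: set of item names the player already has
    owned_bias: weight multiplier for owned items (hypothetical value)
    """
    candidates = list(pool)
    weights = [owned_bias if item in owned else 1.0 for item in candidates]
    picks = []
    for _ in range(min(n, len(candidates))):
        # Weighted draw without replacement, so unrelated items still show up.
        choice = random.choices(candidates, weights=weights, k=1)[0]
        i = candidates.index(choice)
        candidates.pop(i)
        weights.pop(i)
        picks.append(choice)
    return picks
```

With a bias of 3.0 an owned item is three times as likely as any other to fill each slot, which matches the described behavior: relevant offers appear often, but nothing is guaranteed.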
Demo - Update 35 (06 June 2025)
• Singleplayer UI: the Level Up Upgrade Phase and Chest Pickup Phase UI now display the items and attacks inventories (useful to check the scaling of currently equipped attacks, for example)
• Singleplayer Shop: subtle animation while selecting a Buy button
• Many balancing tweaks
• Balancing: nerfed Life Steal in various ways (lower values gained from items)
• Balancing: nerfed Knockback in various ways (lower values gained, higher item rarity, lower max applied value)
• Balancing: too many items enhancing Max HP were in the demo, which made it easier to stack HP and survive due to the higher ratio of items providing HP
• Added a short window during which the player can still pick up Souls even after they're slurped by the Soul Portal
• Fine-tuned the color of some weapons to improve visibility
• Balancing: Ballistas no longer double their projectiles based on Amount (only the number of ballistas scales with Amount)
• If the player's HP is full and Max HP > 20, the player can't be one-shot
• Bugfix: the in-game achievement pop-up could be displayed below other UI elements when it should always be above everything else
• Potential bugfix for a rare bug in the multiplayer shop where player 2's shop sections weren't displayed at all
• Reworked the save system in preparation for upcoming features

xHELLO_WORLDx (10 days ago):
Congrats on the game!

dave9999 (10 days ago):

elijah_ap (10 days ago):
Love the art style, upgrades, controls, etc. Balance might be the only thing off about this. If you were to add anything, I would want to see more variety in the stages, similar to Vampire Survivors. Otherwise, really great.

[reply] Thank you so much! I'll keep working on the balance with each update, and I appreciate the suggestion on stage variety!

Netsmile (10 days ago):
Torch IV has a problem rounding numbers in the stats hover-over display.
Other levels of torches work.

overboy (10 days ago, 1 edit):
Thanks, I'll fix this displayed rounding issue soon!

Skeppartorsk (10 days ago):
For now I'd say it's fun, but lacking a bit in balance. I absolutely suck at brotato-likes but find this one easy, so it's probably undertuned as far as difficulty is concerned. The power and availability of HP and regen items make you just not care if you get hit. Add the relatively strong armor on top, and you're too tanky for anything to feasibly ever kill you.

overboy (10 days ago, 1 edit, +1):
Thanks for the feedback! Sounds like tanky builds might be a bit too forgiving right now, I'll do some balancing changes.

Skeppartorsk (9 days ago, 2 edits):
Life steal has similar issues too. There's also the standard issue with knockback in these kinds of games. The lack of any enemy resistance/diminishing returns means it's way too easy to get enough knockback that enemies cannot touch you anymore. Ranged attacks are too few and far between to worry about with the current levels of sustain, meaning you can just Stand Still and Kill way too reliably. Edit: Lategame with 6x Wands I'm getting so much screen shake it's triggering simulation sickness. It was due to having Pierce + Bounce.
The screen shake came from my projectiles bouncing off the edge of the map.

overboy (8 days ago, 2 edits, +1):
Thanks for your feedback, it will help with the game balancing! For now I try to avoid diminishing returns by design, to make sure each feature and stat is super easy to understand; I dislike it when a roguelike gets too opaque, and I prefer that the player fully and easily understands each of their choices. But yeah, that means there's a good balance to find! In future updates, Life Steal will become harder to get, and Knockback will be capped at lower maximum applied values. Regarding the overall difficulty, the full version has 3 extra difficulty levels, and based on feedback I have from beta testers, the balance between the 5 difficulty modes seems to be close to what I'm aiming for (minus some issues like you pointed out, and of course some balancing required on specific builds and items). There is already an option to disable screen shake ;) Edit: Would you be interested in joining the beta test of the full game? If so, please join the Discord and ping me in DM ;)

Skeppartorsk (8 days ago, 4 edits):
I did notice that you could turn off screen shake entirely, but admittedly a lot of the visceral feel of the combat goes away when you fully disable it. When you have too many Leeroy/knockback/bouncing projectiles, it just reaches the point where simulation sickness sets in. I wish there was something like an intensity setting, or a way to cap how often a screen shake can get triggered. I agree on the opaque thing, but I was thinking of something akin to how CC diminishing returns work in WoW: 1st hit = full value, 2nd hit within 10s = half value, 3rd hit = 1/4 value, then 10s of immunity before it resets. That way you still get knockback when you pick knockback, but you can't just perma-nail enemies against the wall. Edit: Also, there's a wording issue (or a bug) with how multiple pentagrams work.
If you have the adept pentagram and the item pentagram, the wording is "when you stand inside a pentagram," but the item one gives the 20% damage ONLY and the adept one gives the adept bonuses ONLY. The wording would mean that both pentagrams should give the adept bonus AND the 20% damage bonus. Edit 2: I'd suggest reformatting the Grimorius tooltip so that the -10% armor is above the "on level up" portion. The indentation difference between the +1% speed and -10% armor is small enough that I read it as losing 10% armor on every level up.

overboy (8 days ago):
Thanks a lot for the interesting insights! I nerfed HP, Life Steal and Knockback using various techniques in the last update, along with many other changes. Just tested Pentagram/Adept and it works as expected: the 2 effects stack correctly as the wording implied. I reformatted the Grimorius tooltip as you suggested ;)

Bad Piggy (11 days ago):
Very cool in its current state. I love how much it really emphasises movement, like how some active abilities need to be grabbed from around the arena to use them. That said, I think enemy projectiles could honestly stand out more; I could hardly see them at times in all the chaos. Still, I think this is a pretty solid base right now, and as always, you have a beautiful visual style, though I feel like the game suffers a little from how busy it can get. Great stuff so far though.

[reply] Thanks Bad Piggy! Really glad you're enjoying the mechanics. I appreciate the feedback on projectile visibility and how busy things can get. I'll definitely look into ways to improve those aspects. Really grateful for the kind words and thoughtful feedback!

LeoLohandro (11 days ago):
A copy of Brotato :), but still fun.

overboy (11 days ago, 2 edits, +1):
Hey, thanks a lot!
Yes, this game is a Brotato-like with many twists and new innovative mechanics, such as:
• Equippable Boss Patterns (active skills you can trigger by picking up orbs on the map)
• Minion summoning
• Growing plant minions with a watering can
• Amount and Size stats
• Physics-based weapons, like chained spikeballs
• Kickable stuff (you can even play soccer with your minions or other co-op players)
• A playable character merge feature (get the effects of 2 or more different characters at the same time)
• Dozens and dozens of unique effects (turning enemies into Sheep, or Golden Statues, or both?)
I'm aiming for something like The Binding of Isaac meets Brotato: a deep, replayable experience full of chaotic synergies and wild builds that feel totally unique each run, with all the "being a boss" fantasy and humor deeply included in the mechanics and content :)
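The WoW-style diminishing-returns scheme Skeppartorsk proposes earlier in the thread (full value, half within 10 s, then a quarter, then a window of immunity before resetting) is easy to prototype. Below is a minimal sketch assuming a per-enemy tracker; this illustrates the commenter's suggestion, not the game's actual implementation (overboy opted for flat knockback caps instead):

```python
class DiminishingReturns:
    """Per-enemy CC diminishing returns, as described in the thread:
    1st hit = full value, 2nd hit within the window = half, 3rd = quarter,
    then the enemy is immune until the window lapses and the counter resets."""

    FACTORS = [1.0, 0.5, 0.25]

    def __init__(self, window=10.0):
        self.window = window
        self.hits = 0
        self.expires = 0.0  # time at which the current DR state resets

    def apply(self, value, now):
        if now >= self.expires:
            self.hits = 0                  # window lapsed: back to full value
        if self.hits >= len(self.FACTORS):
            return 0.0                     # immune until `expires`
        factor = self.FACTORS[self.hits]
        self.hits += 1
        self.expires = now + self.window   # each applied hit refreshes the window
        return value * factor
```

Note that hits landed during immunity do not refresh the timer, so the immunity lasts a fixed stretch after the third application rather than being extendable forever; that is what keeps "perma-nailing enemies against the wall" impossible while ordinary knockback still works.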
Mock up a website in five prompts

“Wait, can users actually add products to the cart?”

Every prototype faces that question or one like it. You start to explain it’s “just Figma,” “just dummy data,” but what if you didn’t need disclaimers? What if you could hand clients—or your team—a working, data-connected mock-up of their website, or new pages and components, in less time than it takes to wireframe? That’s the challenge we’ll tackle today. But first, we need to look at:

The problem with today’s prototyping tools

Pick two: speed, flexibility, or interactivity. The prototyping ecosystem, despite having amazing software that addresses a huge variety of needs, doesn’t really have one tool that gives you all three. Wireframing apps let you draw boxes in minutes, but every button is fake. Drag-and-drop builders animate scroll triggers until you ask for anything off-template. Custom code frees you… after you wave goodbye to a few afternoons.

AI tools haven’t smashed the trade-off; they’ve just dressed it in flashier costumes. One prompt births a landing page; the next dumps a 2,000-line, worse-than-junior-level React file in your lap. The bottleneck is still there.

Builder’s approach to website mockups

We’ve been trying something a little different to maintain speed, flexibility, and interactivity while mocking full websites. Our AI-driven visual editor:
• Spins up a repo in seconds, or connects to your existing one to use the code as design inspiration. React, Vue, Angular, and Svelte all work out of the box.
• Lets you shape components via plain English, visual edits, copy/pasted Figma frames, web inspos, MCP tools, and constant visual awareness of your entire website.
• Commits each change as a clean GitHub pull request your team can review like hand-written code. All your usual CI checks and lint rules apply.

And if you need a tweak, you can comment to @builderio-bot right in the GitHub PR to make asynchronous changes without context switching. This results in a live site the café owner can interact with today, and a branch your devs can merge tomorrow. Stakeholders get to click actual buttons and trigger real state—no more “so, just imagine this works” demos. Let’s see it in action.

From blank canvas to working mockup in five prompts

Today, I’m going to mock up a fake business website. You’re welcome to create a real one. Before we fire off a single prompt, grab a note and write:
• Business name & vibe
• Core pages
• Primary goal
• Brand palette & tone

That’s it. Don’t sweat the details—we can always iterate. For mine, I wrote:
1. Sunny Trails Bakery — family-owned, feel-good, smells like warm cinnamon.
2. Home, About, Pricing / Subscription Box, Menu.
3. Drive online orders and foot traffic—every CTA should funnel toward “Order Now” or “Reserve a Table.”
4. Warm yellow, chocolate brown, rounded typography, playful copy.

We’re not trying to fit everything here. What matters is clarity on what we’re creating, so the AI has enough context to produce usable scaffolds, and so later tweaks stay aligned with the client’s vision. Builder will default to using React, Vite, and Tailwind. If you want a different JS framework, you can link an existing repo in that stack. In the near future, you won’t need to do this extra step to get non-React frameworks to function.

An entire website from the first prompt

Now, we’re ready to get going. Head over to Builder.io and paste in this prompt or your own:

Create a cozy bakery website called “Sunny Trails Bakery” with pages for:
    • Home
    • About
    • Pricing
    • Menu
    Brand palette: warm yellow and chocolate brown. Tone: playful, inviting. The restaurant is family-owned, feel-good, and smells like cinnamon.
The goal of this site is to drive online orders and foot traffic—every CTA should funnel toward "Order Now" or "Reserve a Table."

Once you hit enter, Builder will spin up a new dev container, and inside that container the AI will build out the first version of your site. You can leave the page and come back when it’s done. Now, before we go further, let’s create our repo, so that we get version history right from the outset. Click “Create Repo” up in the top right, and link your GitHub account. Once the process is complete, you’ll have a brand new repo. If you need any help on this step, or any of the below, check out these docs.

Making the mockup’s order system work

From our one-shot prompt, we’ve already got a really nice start for our client. However, when we press the “Order Now” button, we just get a generic alert. Let’s fix this. The best part about connecting to GitHub is that we get version control. Head back to your dashboard and edit the settings of your new project. We can give it a better name, and then, in the “Advanced” section, change the “Commit Mode” to “Pull Requests.” Now we can create new branches right within Builder, allowing us to make drastic changes without worrying about the main version. This is also helpful if you’d like to show your client or team a few different versions of the same prototype.

On a new branch, I’ll write another short prompt:

Can you make the "Order Now" button work, even if it's just with dummy JSON for now?

As you can see in the GIF above, Builder creates an ordering system and a fully mobile-responsive cart and checkout flow. Now, we can click “Send PR” in the top right, and we have an ordinary GitHub PR that can be reviewed and merged as needed.

This is what’s possible in two prompts.
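A "dummy JSON" cart like the one that second prompt asks for boils down to a small piece of state logic plus placeholder data. The real mockup is generated as React components, but the shape of it can be sketched language-agnostically; the product names, prices, and function names below are invented placeholders, not Builder's output:

```python
# Dummy product data, the kind of JSON a prototype cart might ship with.
PRODUCTS = [
    {"id": "croissant", "name": "Honey Croissant", "price": 3.50},
    {"id": "cinnamon-roll", "name": "Cinnamon Trail Roll", "price": 4.25},
    {"id": "sourdough", "name": "Sunny Sourdough Loaf", "price": 7.00},
]

def add_to_cart(cart, product_id, qty=1):
    """Return a new cart mapping with `qty` of `product_id` added."""
    if not any(p["id"] == product_id for p in PRODUCTS):
        raise KeyError(f"unknown product: {product_id}")
    new_cart = dict(cart)
    new_cart[product_id] = new_cart.get(product_id, 0) + qty
    return new_cart

def cart_total(cart):
    """Sum of price * quantity across the cart, rounded to cents."""
    prices = {p["id"]: p["price"] for p in PRODUCTS}
    return round(sum(prices[pid] * qty for pid, qty in cart.items()), 2)
```

Keeping the cart as plain data with pure add/total functions is exactly what makes it easy to swap the dummy JSON for a real backend later without touching the UI.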
For our third, let’s gussy up the style. If you’re like me, you might spend a lot of time admiring other people’s cool designs and learning how to code up similar components in your own style. Luckily, Builder has this capability too, with our Chrome extension. I found a “Featured Posts” section on OpenAI’s website, where I like how the layout and scrolling work. We can copy and paste it onto our “Featured Treats” section, retaining our cafe’s distinctive brand style. Don’t worry—OpenAI doesn’t mind a little web scraping. You can do this with any component on any website, so your own projects can very quickly become a “best of the web” if you know what you’re doing.

Plus, you can use Figma designs in much the same way, with even better design fidelity. Copy and paste a Figma frame with our Figma plugin, and tell the AI to either use the component as inspiration or as a 1:1 reference for what the design should be.

Now, we’re ready to send our PR. This time, let’s take a closer look at the code the AI has created. As you can see, the code is neatly formatted into two reusable components. Scrolling down further, I find a CSS file and then the actual implementation on the homepage, with clean JSON to represent the dummy post data.

Design tweaks to the mockup with visual edits

One issue that cropped up when the AI brought in the OpenAI layout is that it changed my text from “Featured Treats” to “Featured Stories & Treats.” I’ve realized I don’t like either, and I want to replace that text with “Fresh Out of the Bakery.” It would be silly, though, to prompt the AI just for this small tweak. Let’s switch into Edit Mode. Edit Mode lets you select any component and change any of its content or underlying CSS directly.
You get a host of Webflow-like options to choose from, so you can finesse the details as needed. Once you’ve made all the visual changes you want—maybe tweaking a button color or a border radius—you can click “Apply Edits,” and the AI will ensure the underlying code matches your repo’s style.

Async fixes to the mockup with Builder Bot

Now, our pull request is nearly ready to merge, but I found one issue with it: when we copied the OpenAI website layout earlier, one of the blog posts had a video as its featured graphic instead of just an image. This is cool for OpenAI, but for our bakery, I just wanted images in this section. Since I didn’t instruct Builder’s AI otherwise, it followed the layout and created extra code for video capability.

No problem. We can fix this inside GitHub with our final prompt. We just need to comment on the PR and tag builderio-bot. Within about a minute, Builder Bot has removed the video functionality, leaving a minimal diff that touches only the code it needed to.

Returning to my project in Builder, I can see that the bot’s changes are accounted for in the chat window as well, and I can use the live preview link to make sure my site works as expected. Now, if this were a real project, you could easily deploy this to the web for your client. After all, you’ve got a whole GitHub repo. This isn’t just a mockup; it’s actual code you can tweak—with Builder or Cursor or by hand—until you’re satisfied to run the site in production.

So, why use Builder to mock up your website?

Sure, this has been a somewhat contrived example. A real prototype is going to look prettier, because I’m going to spend more time on pieces of the design that I don’t like as much. But that’s the point of the best AI tools: they don’t take you, the human, out of the loop. You still get to make all the executive decisions, and it respects your hard work. Since you can constantly see all the code the AI creates, work in branches, and prompt with component-level precision, you can stop worrying about AI overwriting your opinions and start using it as the tool it’s designed to be. You can copy in your team’s Figma designs, import web inspos, connect MCP servers to get Jira tickets in hand, and—most importantly—work with existing repos full of existing styles that Builder will understand and match, just like it matched OpenAI’s layout to our little cafe.

So we get speed, flexibility, and interactivity all the way from prompt to PR to production. Try Builder today.
    Mock up a website in five prompts
“Wait, can users actually add products to the cart?”

Every prototype faces that question or one like it. You start to explain it’s “just Figma,” “just dummy data,” but what if you didn’t need disclaimers? What if you could hand clients—or your team—a working, data-connected mock-up of their website, or new pages and components, in less time than it takes to wireframe?

That’s the challenge we’ll tackle today. But first, we need to look at:

The problem with today’s prototyping tools

Pick two: speed, flexibility, or interactivity. The prototyping ecosystem, despite having amazing software that addresses a huge variety of needs, doesn’t really have one tool that gives you all three.

Wireframing apps let you draw boxes in minutes, but every button is fake. Drag-and-drop builders animate scroll triggers until you ask for anything off-template. Custom code frees you… after you wave goodbye to a few afternoons.

AI tools haven’t smashed the trade-off; they’ve just dressed it in flashier costumes. One prompt births a landing page; the next dumps a 2,000-line, worse-than-junior-level React file in your lap. The bottleneck is still there.

Builder’s approach to website mockups

We’ve been trying something a little different to maintain speed, flexibility, and interactivity while mocking full websites. Our AI-driven visual editor:

• Spins up a repo in seconds or connects to your existing one to use the code as design inspiration. React, Vue, Angular, and Svelte all work out of the box.
• Lets you shape components via plain English, visual edits, copy/pasted Figma frames, web inspos, MCP tools, and constant visual awareness of your entire website.
• Commits each change as a clean GitHub pull request your team can review like hand-written code.
All your usual CI checks and lint rules apply. And if you need a tweak, you can comment to @builderio-bot right in the GitHub PR to make asynchronous changes without context switching.

This results in a live site the café owner can interact with today, and a branch your devs can merge tomorrow. Stakeholders get to click actual buttons and trigger real state—no more “so, just imagine this works” demos. Let’s see it in action.

From blank canvas to working mockup in five prompts

Today, I’m going to mock up a fake business website. You’re welcome to create a real one. Before we fire off a single prompt, grab a note and write:

1. Business name & vibe
2. Core pages
3. Primary goal
4. Brand palette & tone

That’s it. Don’t sweat the details—we can always iterate. For mine, I wrote:

1. Sunny Trails Bakery — family-owned, feel-good, smells like warm cinnamon.
2. Home, About, Pricing / Subscription Box, Menu (with daily specials).
3. Drive online orders and foot traffic—every CTA should funnel toward “Order Now” or “Reserve a Table.”
4. Warm yellow, chocolate brown, rounded typography, playful copy.

We’re not trying to fit everything here. What matters is clarity on what we’re creating, so the AI has enough context to produce usable scaffolds, and so later tweaks stay aligned with the client’s vision.

Builder will default to using React, Vite, and Tailwind. If you want a different JS framework, you can link an existing repo in that stack. In the near future, you won’t need to do this extra step to get non-React frameworks to function. (Free-tier Builder gives you 5 AI credits/day and 25/month—plenty to follow along with today’s demo. Upgrade only when you need it.)

An entire website from the first prompt

Now, we’re ready to get going. Head over to Builder.io and paste in this prompt or your own:

Create a cozy bakery website called “Sunny Trails Bakery” with pages for:
• Home
• About
• Pricing
• Menu
Brand palette: warm yellow and chocolate brown. Tone: playful, inviting.
The restaurant is family-owned, feel-good, and smells like cinnamon. The goal of this site is to drive online orders and foot traffic—every CTA should funnel toward "Order Now" or "Reserve a Table."

Once you hit enter, Builder will spin up a new dev container, and then inside that container, the AI will build out the first version of your site. You can leave the page and come back when it’s done.

Now, before we go further, let’s create our repo, so that we get version history right from the outset. Click “Create Repo” up in the top right, and link your GitHub account. Once the process is complete, you’ll have a brand new repo. If you need any help on this step, or any of the below, check out these docs.

Making the mockup’s order system work

From our one-shot prompt, we’ve already got a really nice start for our client. However, when we press the “Order Now” button, we just get a generic alert. Let’s fix this.

The best part about connecting to GitHub is that we get version control. Head back to your dashboard and edit the settings of your new project. We can give it a better name, and then, in the “Advanced” section, we can change the “Commit Mode” to “Pull Requests.” Now, we have the ability to create new branches right within Builder, allowing us to make drastic changes without worrying about the main version. This is also helpful if you’d like to show your client or team a few different versions of the same prototype.

On a new branch, I’ll write another short prompt:

Can you make the “Order Now” button work, even if it’s just with dummy JSON for now?

As you can see in the GIF above, Builder creates an ordering system and a fully mobile-responsive cart and checkout flow. Now, we can click “Send PR” in the top right, and we have an ordinary GitHub PR that can be reviewed and merged as needed. This is what’s possible in two prompts.
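The article doesn’t show the generated code, but a dummy-JSON ordering flow along these lines is the kind of thing such a prompt produces. This is only a sketch of the idea; the `MenuItem` type and the `addToCart`/`cartTotal` helpers are illustrative names of my own, not Builder’s actual output:

```typescript
// Hypothetical dummy data and cart logic behind an "Order Now" button.
// All names and prices here are invented for illustration.
interface MenuItem {
  id: string;
  name: string;
  price: number; // USD
}

// Dummy JSON standing in for a real menu API
const menu: MenuItem[] = [
  { id: "cinnamon-roll", name: "Cinnamon Roll", price: 4.5 },
  { id: "sourdough", name: "Sourdough Loaf", price: 8.0 },
];

type Cart = Record<string, number>; // item id -> quantity

function addToCart(cart: Cart, itemId: string): Cart {
  // Return a new cart object so React state updates stay immutable
  return { ...cart, [itemId]: (cart[itemId] ?? 0) + 1 };
}

function cartTotal(cart: Cart): number {
  return Object.entries(cart).reduce((sum, [id, qty]) => {
    const item = menu.find((m) => m.id === id);
    return sum + (item ? item.price * qty : 0);
  }, 0);
}

// "Order Now" clicked twice on the cinnamon roll:
let cart: Cart = {};
cart = addToCart(cart, "cinnamon-roll");
cart = addToCart(cart, "cinnamon-roll");
console.log(cartTotal(cart)); // 9
```

The point of the dummy-JSON approach is that swapping in a real API later only touches the `menu` constant; the cart and checkout logic stay the same.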
For our third, let’s gussy up the style. If you’re like me, you might spend a lot of time admiring other people’s cool designs and learning how to code up similar components in your own style. Luckily, Builder has this capability, too, with our Chrome extension.

I found a “Featured Posts” section on OpenAI’s website, where I like how the layout and scrolling work. We can copy and paste it onto our “Featured Treats” section, retaining our cafe’s distinctive brand style. Don’t worry—OpenAI doesn’t mind a little web scraping. You can do this with any component on any website, so your own projects can very quickly become a “best of the web” if you know what you’re doing.

Plus, you can use Figma designs in much the same way, with even better design fidelity. Copy and paste a Figma frame with our Figma plugin, and tell the AI to either use the component as inspiration or as a 1:1 reference for what the design should be. (You can grab our design-to-code guide for a lot more ideas of what this can help you accomplish.)

Now, we’re ready to send our PR. This time, let’s take a closer look at the code the AI has created. As you can see, the code is neatly formatted into two reusable components. Scrolling down further, I find a CSS file and then the actual implementation on the homepage, with clean JSON to represent the dummy post data.

Design tweaks to the mockup with visual edits

One issue that cropped up when the AI brought in the OpenAI layout is that it changed my text from “Featured Treats” to “Featured Stories & Treats.” I’ve realized I don’t like either, and I want to replace that text with “Fresh Out of the Bakery.” It would be silly, though, to prompt the AI just for this small tweak. Let’s switch into Edit Mode.

Edit Mode lets you select any component and change any of its content or underlying CSS directly.
You get a host of Webflow-like options to choose from, so that you can finesse the details as needed. Once you’ve made all the visual changes you want—maybe tweaking a button color or a border radius—you can click “Apply Edits,” and the AI will ensure the underlying code matches your repo’s style.

Async fixes to the mockup with Builder Bot

Now, our pull request is nearly ready to merge, but I found one issue with it: when we copied the OpenAI website layout earlier, one of the blog posts had a video as its featured graphic instead of just an image. This is cool for OpenAI, but for our bakery, I just wanted images in this section. Since I didn’t instruct Builder’s AI otherwise, it went ahead and followed the layout and created extra code for video capability.

No problem. We can fix this inside GitHub with our final prompt. We just need to comment on the PR and tag builderio-bot. Within about a minute, Builder Bot has successfully removed the video functionality, leaving a minimal diff that affects only the code it needed to. Returning to my project in Builder, I can see that the bot’s changes are accounted for in the chat window as well, and I can use the live preview link to make sure my site works as expected.

Now, if this were a real project, you could easily deploy this to the web for your client. After all, you’ve got a whole GitHub repo. This isn’t just a mockup; it’s actual code you can tweak—with Builder or Cursor or by hand—until you’re satisfied to run the site in production.

So, why use Builder to mock up your website?

Sure, this has been a somewhat contrived example. A real prototype is going to look prettier, because I’m going to spend more time on pieces of the design that I don’t like as much. But that’s the point of the best AI tools: they don’t take you, the human, out of the loop. You still get to make all the executive decisions, and it respects your hard work.
Since you can constantly see all the code the AI creates, work in branches, and prompt with component-level precision, you can stop worrying about AI overwriting your opinions and start using it more as the tool it’s designed to be.

You can copy in your team’s Figma designs, import web inspos, connect MCP servers to get Jira tickets in hand, and—most importantly—work with existing repos full of existing styles that Builder will understand and match, just like it matched OpenAI’s layout to our little cafe.

So, we get speed, flexibility, and interactivity all the way from prompt to PR to production. Try Builder today.
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, …
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
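As an editorial aside, the dynamic Bubeck describes, where pushing too hard against an imperfect learned reward yields a sycophantic model, can be sketched with a toy simulation. Everything below is hypothetical: `reward_model` is a stand-in scoring function (a real reward model is itself an LLM), and the numbers are invented purely to illustrate how a small unintended bias in the reward comes to dominate as optimization pressure grows.

```python
import math

# Hypothetical stand-in for a learned reward model. A real reward model is
# itself an LLM; here a simple scoring rule plays its role, with a small
# unintended bias toward flattering language.
def reward_model(response: str) -> float:
    score = 1.0
    if "great question" in response:
        score += 0.6   # accidental flattery bonus the trainers never intended
    if "you are wrong" in response:
        score -= 0.4   # disagreement mildly penalized
    return score

def policy(responses, pressure: float):
    """Choose among responses with probability ~ softmax(pressure * reward).
    Higher pressure models harder optimization against the reward model."""
    weights = [math.exp(pressure * reward_model(r)) for r in responses]
    total = sum(weights)
    return {r: w / total for r, w in zip(responses, weights)}

candidates = [
    "Actually, you are wrong: the differential omits a key diagnosis.",
    "What a great question! Your differential is wonderfully creative.",
]

mild = policy(candidates, pressure=1.0)   # gentle optimization
hard = policy(candidates, pressure=20.0)  # "push too hard" on the reward
# Under mild optimization the honest answer retains real probability mass;
# under hard optimization the flattering answer dominates almost entirely.
```

The point of the sketch is only that the trap is quantitative, not qualitative: the same reward model that behaves acceptably under light optimization produces near-total sycophancy when optimized against hard enough.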
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.
That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
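To make Lee’s point about machine-checkable proofs concrete, here is a tiny, editorially added Lean 4 example (not from the conversation): once a proof is written in such a language, the proof checker, not a human reader, is what certifies its validity, and the same would hold for a machine-generated proof far too complex for any mathematician to follow.

```lean
-- Editorial illustration: a statement whose correctness is certified by
-- Lean's kernel rather than by a human reader following the argument.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```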
BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see it produced what you wanted. So I absolutely agree with that.
And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    How AI is reshaping the future of healthcare and medical research
Transcript

PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”

This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.

Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?

In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here. The book passage I read at the top is from “Chapter 10: The Big Black Bag.”

In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.

As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.

Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.

Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.

Here’s my conversation with Bill Gates and Sébastien Bubeck.

LEE: Bill, welcome.

BILL GATES: Thank you.
LEE: Seb …

SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.

LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?

GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.

And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.

And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models.
But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.

LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?

GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …

LEE: Right.

GATES: … that is a bit weird.

LEE: Yeah.

GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.

LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.

BUBECK: Yes.

LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.

BUBECK: Yeah.

LEE: And so what were your first encounters? Because I actually don’t remember what happened then.

BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way.
But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. So this was really, to me, the first moment where I saw some understanding in those models.

LEE: So this was, just to get the timing right, that was before I pulled you into the tent.

BUBECK: That was before. That was like a year before.

LEE: Right.

BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.

So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.

And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?

LEE: Yeah.

BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible.
And just right there, it was shown to be possible.

LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.

And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb. I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.

But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?

You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.
Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?

GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.

It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.

But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.

LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?

BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous.
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  
LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa … so, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture, but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   
The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. 
And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. 
It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? 
What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. 
And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   
The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. 
I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? 
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. 
He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? 
What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.
    How AI is reshaping the future of healthcare and medical research
Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  
BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. 
It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  
I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. 
Why don’t we ask this new model this question?

LEE: Yeah.

BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.

LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages. And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb. I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?
You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit. Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?

GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done. It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.

LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.
[LAUGHTER] Does that make sense to you?

BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are. Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%. Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.
It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.

LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?

GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, then, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.

LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?

GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education and agriculture, but more healthcare examples than anything.
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases. The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.

LEE: Right.

GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.

LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.

BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmarks. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.
LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake? That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?

BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there. Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model?
So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.

LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …

BUBECK: It’s a very difficult, very difficult balance.

LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?

GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria.
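[As an editorial aside: the "reward model trap" Bubeck describes earlier — push too hard against an imperfect reward model and you get a sycophant — can be sketched with a toy example. Everything below (the candidate answers, the numbers, the bias term) is invented for illustration; this is not real RLHF training code.]

```python
import random

# Candidate responses a model might give to a flawed differential diagnosis:
# (text, true_quality, agreeableness). Numbers are made up for illustration.
CANDIDATES = [
    ("You made two errors in this differential.", 0.9, 0.1),
    ("Mostly right, but check one element.",      0.7, 0.5),
    ("Great differential, very creative!",        0.3, 1.0),
]

def proxy_reward(quality: float, agreeableness: float) -> float:
    # A learned reward model mostly tracks answer quality but, opaquely,
    # also leaks in a preference for agreeable tone -- the sycophancy bias.
    return quality + 1.0 * agreeableness

def best_of_n(n: int, rng: random.Random):
    # Crude optimization pressure: sample n candidate answers and keep
    # whichever one the proxy reward model scores highest.
    samples = [rng.choice(CANDIDATES) for _ in range(n)]
    return max(samples, key=lambda c: proxy_reward(c[1], c[2]))

rng = random.Random(0)
# Light optimization: essentially a random draw from the candidates.
print(best_of_n(1, rng)[0])
# Heavy optimization against the flawed proxy: the sycophantic answer
# wins, since its proxy score (0.3 + 1.0 = 1.3) beats the honest
# answer's (0.9 + 0.1 = 1.0), even though its true quality is lowest.
print(best_of_n(50, rng)[0])  # → Great differential, very creative!
```

The same candidates ranked by true quality would put the honest correction first; the gap between the two rankings is exactly the trap — the harder you optimize, the more the proxy's hidden bias dominates.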
So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have in there. Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them? Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.

LEE: Yeah.

GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.

LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on?

BUBECK: Yeah. OK.
So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just, you know, explain your own context and it will just get it and understand everything. That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.

LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?

BUBECK: Yeah, no, absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.

LEE: So we have about three hours of stuff to talk about, but our time is actually running low.

BUBECK: Yes, yes, yes.

LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?
GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor. The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.

LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?

GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.

LEE: Yeah.
By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen. I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in the wet lab, we see, oh yeah, this actually works, but no one can understand why.

BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that. And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.

LEE: Yeah.

BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect would happen so quickly, and it’s due to those reasoning models.
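[For readers unfamiliar with proof assistants: a proof written in a language such as Lean is verified mechanically, step by step, so its validity does not depend on any human following the argument — which is exactly why a correct but humanly incomprehensible proof is possible. A minimal Lean 4 example (the theorem name is ours; `Nat.add_comm` is a standard library lemma):]

```lean
-- A machine-checkable proof: Lean verifies each inference mechanically,
-- so validity does not depend on a human reading the argument.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```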
Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.

LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …

BUBECK: Yeah.

LEE: … or an endocrinologist might not.

BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.

LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?

BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.
And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …

LEE: Will AI prescribe your medicines? Write your prescriptions?

BUBECK: I think yes. I think yes.

LEE: OK. Bill?

GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, the quality of care, the reduced overload on doctors, and the improvement in the economics will be enough that their voters will be stunned, because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.

LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.
I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.

[TRANSITION MUSIC]

GATES: Yeah. Thanks, you guys.

BUBECK: Thank you, Peter. Thanks, Bill.

LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien. With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing. And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence. One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.
HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
  • My unexpected Pride icon: Link from the Zelda games, a non-binary hero who helped me work out who I was

Growing up steeped in the aggressive gender stereotypes of the 1990s was a real trip for most queer millennials, but I think gamers had it especially hard. Almost all video game characters were hypermasculine military men, unrealistically curvaceous fantasy women wearing barely enough armour to cover their nipples, or cartoon animals. Most of these characters catered exclusively to straight teenage boys; overt queer representation in games was pretty much nonexistent until the mid 2010s. Before that, we had to take what we could get. And what I had was Link, from The Legend of Zelda.

Link is a boy, but he didn’t really look like one. He wore a green tunic and a serious expression under a mop of blond hair. He is the adventurous, mostly silent hero of the Zelda games, unassuming and often vulnerable, but also resourceful, daring and handy with a sword. In most of the early Zelda games, he is a kid of about 10, but even when he grew into a teenager in 1998’s Ocarina of Time on the Nintendo 64, he didn’t become a furious lump of muscle. He stayed androgynous, in his tunic and tights. As a kid, I would dress up like him for Halloween, carefully centre-parting my blond fringe. Link may officially be a boy, but for me he has always been a non-binary icon.

As time has gone on and game graphics have evolved, Link has stayed somewhat gender-ambiguous. Gay guys and gender-fluid types alike appreciate his ageless twink energy. And given the total lack of thought that most game developers gave to players who weren’t straight and male, I felt vindicated when I found out that this was intentional. In 2016, the Zelda series’ producer Eiji Aonuma told Time magazine that the development team had experimented a little with Link’s gender presentation over the years, but that he felt that the character’s androgyny was part of who he was. “Back during the Ocarina of Time days, I wanted Link to be gender neutral,” he said.
“I wanted the player to think: ‘Maybe Link is a boy or a girl.’ If you saw Link as a guy, he’d have more of a feminine touch. Or vice versa … I’ve always thought that for either female or male players, I wanted them to be able to relate to Link.”

As it turns out, Link appeals perhaps most of all to those of us somewhere in between. In 2023, the tech blog io9 spoke to many transgender and non-binary people who saw something of themselves in Link: he has acquired a reputation as an egg-cracker, a fictional character who prompts a realisation about your own gender identity.

Despite their outdated reputation as a pursuit for adolescent boys, video games have always been playgrounds for gender experimentation and expression. There are legions of trans, non-binary and gender non-conforming people who first started exploring their identity with customisable game characters in World of Warcraft, or gender-swapping themselves in The Sims – the digital equivalent of dressing up. Video games are the closest you can come to stepping into a new body for a bit and seeing how it feels.

It is no surprise to me that a lot of queer people are drawn to video games. A 2024 survey by GLAAD found that 17% of gamers identify as LGBTQ+, a huge number compared with the general population. It may be because people who play games skew younger – 40 and below – but I also think it’s because gender is all about play. What fun it is to mess with the rules, subvert people’s expectations and create your own character. It is as empowering as any world-saving quest.
    My unexpected Pride icon: Link from the Zelda games, a non-binary hero who helped me work out who I was
  • JSWD extends 1960s town hall with interlocking structures and perforated façade in Germany

    Submitted by WA Contents

    Germany Architecture News - Jun 12, 2025 - 04:18  

    Cologne-based architecture firm JSWD has extended a 1960s town hall with interlocking structures and a perforated brick façade in Brühl, Germany. Called Brühl City Hall and Library, the 5,200-square-metre project comprises a new library building and the refurbishment of the old City Hall.

    New entrance at the pedestrian zone. Image © Taufik Kenan

    JSWD won first prize in a 2017 competition to build the project. The competition's goals were to design a proposal for the nearby Janshof Square and to propose an addition to the existing town hall. An extension constructed in the 1960s had to be replaced as part of the renovation.

    Staggered gables of the new library with partially perforated brickwork. Image © Christa Lachenmaier

    Connecting the new structure to the historic town hall and refurbishing the latter in accordance with heritage regulations presented a unique task. The result is an easily accessible, energy-efficient town hall that satisfies the most recent regulations. It is designed to allow for flexible use and to connect different building functions to create synergies.

    Aerial photo with Brühl Castle in the background. Image © Franco Casaccia / JSWD

    Above the civil registry offices (Bürgeramt and Standesamt) are the municipal authorities' offices. The new building houses the municipal library on all floors, including a children's library in the basement that opens onto a reading courtyard. The town hall is easy to find thanks to its clear signage. The pedestrian area and the now largely car-free Janshof are also accessible from the new foyer; from here, the routes of tourists and of pedestrians who enter the old building via Markt converge.

    Aerial view: staggered gables of the new library in the middle of the town. Image © Schmitz.Reichard GmbH

    The new structure in Brühl's old city center experiments with the concept of varied urban spaces and proportions. The front structure references the shape of the historic town hall, creating a cubature that is both distinctive and typical of the area. The brickwork is partially perforated to filter the light entering through the windows beneath it, and the three interlocking structures are placed with their gables facing the adjacent street. The use of the same light-colored bricks for roof and façade reinforces the new building's cubic impression.

    Historical council chamber. Image © Christa Lachenmaier

    The project aims to be as sustainable as feasible. For instance, the firm made every effort to preserve the old building's structure.

    Children's library at the reading courtyard. Image © Christa Lachenmaier

    A combined heat and power plant supplies both heat and electricity. Concrete component activation, together with triple-glazed windows, abundant natural light and exterior solar protection, keeps energy consumption low.

    Staircase in the listed city hall. Image © Franco Casaccia / JSWD
    Library room on the top floor. Image © Franco Casaccia / JSWD
    Large dormer of the library. Image © Franco Casaccia / JSWD
    Reading area in the dormer window. Image © Franco Casaccia / JSWD
    Connection of the new library to the listed town hall. Image © Taufik Kenan
    The listed city hall of Brühl, restored by JSWD. Image © Franco Casaccia / JSWD
    Image © Franco Casaccia / JSWD
    View of the inner courtyard of the library. Image © Franco Casaccia / JSWD
    Library dormers and staggered gables with partially perforated brickwork. Image © Franco Casaccia / JSWD

    Drawings: site plan, basement, ground, first, second and third floor plans, detail drawing and façade detail. All drawings © JSWD.

    Project facts
    Project name: Brühl City Hall and Library
    Program: New construction of the library and refurbishment of the old City Hall
    Location: Steinweg 1, 50321 Brühl, Germany
    Client: City of Brühl
    Architecture: JSWD, 1st prize competition 2017
    Completion: 2023
    Structural design: Kempen Krause Ingenieure, Aachen
    Building service engineering: DEERNS
    Library and interior planning: UKW Innenarchitekten, Krefeld
    Landscape: RMPSL, Bonn
    Site: 4,800 m²
    GFA: 5,200 m²

    Top image: New library of Brühl, entrance from the Janshof. Image © Taufik Kenan.

    > via JSWD
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm


    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just million — less than 1.2% of OpenAI’s investment.
    If you've gone starry-eyed believing these incredible results were achieved while DeepSeek laboured under a severe disadvantage, unable to access advanced AI chips, I hate to tell you: that narrative isn't entirely accurate. Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means the chips DeepSeek had access to were not poor-quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running its large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
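    For readers unfamiliar with the mechanics of model distillation mentioned above, here is a minimal sketch of the classic formulation: the student is trained to match the teacher's temperature-softened output distribution via a KL-divergence loss. This is illustrative only, not DeepSeek's actual training code; the logits and temperature are made-up toy values.

    ```python
    import numpy as np

    def softmax(z, temperature=1.0):
        """Temperature-scaled softmax over the last axis."""
        z = np.asarray(z, dtype=float) / temperature
        z = z - z.max(axis=-1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """KL(teacher || student) on softened distributions.

        The student matches the teacher's full output distribution, not just
        its top-1 label, which is what makes distillation so data-efficient.
        """
        p_teacher = softmax(teacher_logits, temperature)
        p_student = softmax(student_logits, temperature)
        kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
        # The T^2 factor keeps gradient magnitudes comparable across temperatures.
        return float(np.mean(kl) * temperature ** 2)

    # Toy logits: a student that tracks the teacher scores a low loss.
    teacher      = [[4.0, 1.0, 0.5], [0.2, 3.5, 0.1]]
    good_student = [[3.8, 1.1, 0.4], [0.3, 3.4, 0.2]]
    bad_student  = [[0.5, 0.5, 4.0], [3.5, 0.1, 0.2]]
    ```

    In practice the "teacher logits" are replaced by a stronger model's outputs over the training corpus, which is why access to a powerful proprietary model's responses is so valuable.
    
    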
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture-of-experts (MoE) architectures like DeepSeek's tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending to 8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending billion or billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive billion funding round that valued the company at an unprecedented billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute”. As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real-time, comparing responses against core rules and quality standards.
    The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM”. But, as with its model distillation approach, this could be considered a mix of promise and risk.
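    The generate-critique-select pattern behind this kind of test-time compute can be sketched as follows. This is an illustrative skeleton, not DeepSeek-GRM's actual algorithm: `derive_principles`, `judge` and the toy model are invented stand-ins (a real judge would be the model critiquing its own candidates against its self-derived rubric).

    ```python
    import itertools

    def derive_principles(prompt):
        """Step 1: the model writes its own rubric for judging answers to
        this prompt. Stubbed with fixed criteria for illustration."""
        return ["directly answers the question",
                "is internally consistent",
                "states no unsupported claims"]

    def judge(candidate, principles):
        """Step 2: a built-in judge critiques each candidate against the
        rubric and returns a scalar reward. This toy stands in with a
        length-capped heuristic that favours fuller answers."""
        return len(principles) * min(len(candidate), 80)

    def best_of_n(model, prompt, n=4):
        """Test-time compute: rather than returning the first sample, spend
        extra inference on n samples plus self-critique and keep the best."""
        principles = derive_principles(prompt)
        candidates = [model(prompt) for _ in range(n)]
        return max(candidates, key=lambda c: judge(c, principles))

    # Toy model that returns progressively fuller answers on each call.
    answers = itertools.cycle(["4", "The answer is 4.",
                               "2 + 2 = 4, by ordinary integer addition."])
    toy_model = lambda prompt: next(answers)

    print(best_of_n(toy_model, "What is 2 + 2?", n=3))
    # prints "2 + 2 = 4, by ordinary integer addition."
    ```

    The point of the sketch is where the compute goes: quality improves at inference time, by sampling and self-judging, rather than by making the model larger during training.
    
    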
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as DeepSeek again builds on the body of work of others to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there is still a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter; they must also remain aligned, interpretable and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded,
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.
