• From Rivals to Partners: What’s Up with the Google and OpenAI Cloud Deal?

    Google and OpenAI struck a cloud computing deal in May, according to a Reuters report.
    The deal surprised the industry as the two are seen as major AI rivals.
    Signs of friction between OpenAI and Microsoft may have also fueled the move.
    The partnership is a win-win: OpenAI gets badly needed computing resources, while Google profits from its $75B investment to boost its cloud computing capacity in 2025.

    In a surprise move, Google and OpenAI inked a deal that will see the AI rivals partnering to address OpenAI’s growing cloud computing needs.
    The story, reported by Reuters, cited anonymous sources saying that the deal had been discussed for months and finalized in May. Around this time, OpenAI had struggled to keep up with demand as its number of weekly active users and business users grew in Q1 2025. There’s also speculation of friction between OpenAI and its biggest investor, Microsoft.
    Why the Deal Surprised the Tech Industry
    The rivalry between the two companies hardly needs an introduction. When OpenAI’s ChatGPT launched in November 2022, it posed a huge threat to Google that triggered a code red within the search giant and cloud services provider.
    Since then, Google has launched Bard (now known as Gemini) to compete with OpenAI head-on. However, it had to play catch-up with OpenAI’s more advanced ChatGPT chatbot. This led to numerous issues with Bard, with critics referring to it as a half-baked product.

    A post on X in February 2023 showed the Bard AI chatbot erroneously stating that the James Webb Space Telescope took the first picture of an exoplanet. It was, in fact, the European Southern Observatory’s Very Large Telescope that did this in 2004. Google’s parent company Alphabet lost $100B off its market value within 24 hours as a result.
    Two years on, Gemini has made significant strides in accuracy, citing sources, and depth of information, but it is still prone to hallucinations from time to time. You can see examples posted on social media, such as the AI telling a user to make spicy spaghetti with gasoline or insisting that it’s still 2024.

    With the entire industry shifting towards more AI integrations, Google went ahead and integrated its AI suite into Search via AI Overviews. It then doubled down on this integration with AI Mode, an experimental feature that lets you perform AI-powered searches by typing in a question, uploading a photo, or using your voice.
    In the future, AI Mode from Google Search could be a viable competitor to ChatGPT—unless, of course, Google decides to bin it along with many of its previous products. Given the scope of the investment and Gemini’s significant improvement, we doubt AI + Search will be axed.
    It’s a Win-Win for Google and OpenAI—Not So Much for Microsoft?
    In the business world, money and the desire for expansion can break even the biggest rivalries. The one between these two tech giants is no exception.
    Partly, it could be attributed to OpenAI’s relationship with Microsoft. Although the Redmond, Washington-based company has invested billions in OpenAI and has the resources to meet the latter’s cloud computing needs, their partnership hasn’t always been rosy. 
    Some would say it began when OpenAI CEO Sam Altman was briefly ousted in November 2023, which put a strain on the ‘best bromance in tech’ between him and Microsoft CEO Satya Nadella. Then last year, Microsoft added OpenAI to its list of competitors in the AI space before eventually losing its status as OpenAI’s exclusive cloud provider in January 2025.
    If that wasn’t enough, there’s also the matter of the two companies’ goal of achieving artificial general intelligence (AGI). Defined as the point when OpenAI develops AI systems that generate $100B in profits, reaching AGI means Microsoft will lose access to OpenAI’s technology. With the company behind ChatGPT expecting to triple its 2025 revenue to $12.7B from $3.7B the previous year, this could happen sooner rather than later.
    While OpenAI already has deals with Microsoft, Oracle, and CoreWeave to provide it with cloud services and access to infrastructure, it needs more, and soon, as the company has seen massive growth in the past few months.
    In February, OpenAI announced that it had over 400M weekly active users, up from 300M in December 2024. Meanwhile, the number of its business users who use ChatGPT Enterprise, ChatGPT Team, and ChatGPT Edu products also jumped from 2M in February to 3M in March.
    The good news is Google is more than ready to deliver. Its parent company has earmarked $75B towards its investments in AI this year, which includes boosting its cloud computing capacity.

    In April, Google launched its 7th-generation tensor processing unit (TPU), called Ironwood, which has been designed specifically for inference. According to the company, the new TPU will help power AI models that will ‘proactively retrieve and generate data to collaboratively deliver insights and answers, not just data.’
    The deal with OpenAI can be seen as a vote of confidence in Google’s cloud computing capability, which competes with the likes of Microsoft Azure and Amazon Web Services. It also expands Google’s vast client list, which includes tech, gaming, entertainment, and retail companies, as well as organizations in the public sector.

    As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist Cedric Solidon continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy.
    With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media. Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility.
    Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines.
    Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech. 
    He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom.
    That fascination with tech didn’t just stick. It evolved into a full-blown calling.
    After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career.
    He has since collaborated with global tech leaders, lending his voice to content that bridges technical expertise with everyday usability. He’s also written annual reports for Globe Telecom and consumer-friendly guides for VPN companies like CyberGhost and ExpressVPN, empowering readers to understand the importance of digital privacy.
    His versatility spans not just tech journalism but also technical writing. He once worked with a local tech company developing web and mobile apps for logistics firms, crafting documentation and communication materials that brought together user-friendliness with deep technical understanding. That experience sharpened his ability to break down dense, often jargon-heavy material into content that speaks clearly to both developers and decision-makers.
    At the heart of his work lies a simple belief: technology should feel empowering, not intimidating. Even if the likes of smartphones and AI are now commonplace, he understands that there's still a knowledge gap, especially when it comes to hardware or the real-world benefits of new tools. His writing hopes to help close that gap.
    Cedric’s writing style reflects that mission. It’s friendly without being fluffy and informative without being overwhelming. Whether writing for seasoned IT professionals or casual readers curious about the latest gadgets, he focuses on how a piece of technology can improve our lives, boost our productivity, or make our work more efficient. That human-first approach makes his content feel more like a conversation than a technical manual.
    As his writing career progresses, his passion for tech journalism remains as strong as ever. With the growing need for accessible, responsible tech communication, he sees his role not just as a journalist but as a guide who helps readers navigate a digital world that’s often as confusing as it is exciting.
    From reviewing the latest devices to unpacking global tech trends, Cedric isn’t just reporting on the future; he’s helping to write it.

  • IT Pros ‘Extremely Worried’ About Shadow AI: Report

    IT Pros ‘Extremely Worried’ About Shadow AI: Report

    By John P. Mello Jr.
    June 4, 2025 5:00 AM PT


    Shadow AI — the use of AI tools under the radar of IT departments — has information technology directors and executives worried, according to a report released Tuesday.
    The report, based on a survey of 200 IT directors and executives at U.S. enterprise organizations of 1,000 employees or more, found nearly half the IT pros were “extremely worried” about shadow AI, and almost all of them were concerned about it from a privacy and security viewpoint.
    “As our survey found, shadow AI is resulting in palpable, concerning outcomes, with nearly 80% of IT leaders saying it has resulted in negative incidents such as sensitive data leakage to Gen AI tools, false or inaccurate results, and legal risks of using copyrighted information,” said Krishna Subramanian, co-founder of Campbell, Calif.-based Komprise, the unstructured data management company that produced the report.
    “Alarmingly, 13% say that shadow AI has caused financial or reputational harm to their organizations,” she told TechNewsWorld.
    Subramanian added that shadow AI poses a much greater problem than shadow IT, which primarily focuses on departmental power users purchasing cloud instances or SaaS tools without obtaining IT approval.
    “Now we’ve got an unlimited number of employees using tools like ChatGPT or Claude AI to get work done, but not understanding the potential risk they are putting their organizations at by inadvertently submitting company secrets or customer data into the chat prompt,” she explained.
    “The data risk is large and growing in still unforeseen ways because of the pace of AI development and adoption and the fact that there is a lot we don’t know about how AI works,” she continued. “It is becoming more humanistic all the time and capable of making decisions independently.”
    Shadow AI Introduces Security Blind Spots
    Shadow AI is the next step after shadow IT and is a growing risk, noted James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
    “Users use AI tools for content, images, or applications and to process sensitive data or company information without proper security checks,” he told TechNewsWorld. “Most organizations will have privacy, compliance, and data protection policies, and shadow AI introduces blind spots in the organization’s data loss prevention.”
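    To make the blind spot concrete: one common starting point for security teams is simply surfacing which users are reaching known generative-AI services at all. The following is a purely illustrative sketch, not something from the report; the domain list and log format are assumptions for the example.

```python
# Illustrative sketch: flag potential shadow AI usage from web proxy logs.
# The domain list and log-record shape are hypothetical examples.
from collections import Counter

# A real deployment would maintain a much larger, regularly updated list.
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(log_rows):
    """Count requests per user to known generative-AI domains.

    Each row is expected to be a dict like {"user": ..., "domain": ...}.
    Returns a Counter mapping user -> number of flagged requests.
    """
    hits = Counter()
    for row in log_rows:
        if row["domain"] in GENAI_DOMAINS:
            hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    rows = [
        {"user": "alice", "domain": "chat.openai.com"},
        {"user": "alice", "domain": "intranet.example.com"},
        {"user": "bob", "domain": "claude.ai"},
    ]
    print(flag_shadow_ai(rows))  # Counter({'alice': 1, 'bob': 1})
```

    A report like this would feed review and policy conversations rather than automatic blocking; it shows who is using which tools, not what data they submitted, which is exactly the gap the experts above describe.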
    “The biggest risk with shadow AI is that the AI application has not passed through a security analysis as approved AI tools may have been,” explained Melissa Ruzzi, director of AI at AppOmni, a SaaS security management software company, in San Mateo, Calif.
    “Some AI applications may be training models using your data, may not adhere to relevant regulations that your company is required to follow, and may not even have the data storage security level you deem necessary to keep your data from being exposed,” she told TechNewsWorld. “Those risks are blind spots of potential security vulnerabilities in shadow AI.”
    Krishna Vishnubhotla, vice president of product strategy at Zimperium, a mobile security company based in Dallas, noted that shadow AI extends beyond unapproved applications and involves embedded AI components that can process and disseminate sensitive data in unpredictable ways.
    “Unlike traditional shadow IT, which may be limited to unauthorized software or hardware, shadow AI can run on employee mobile devices outside the organization’s perimeter and control,” he told TechNewsWorld. “This creates new security and compliance risks that are harder to track and mitigate.”
    Vishnubhotla added that the financial impact of shadow AI varies, but unauthorized AI tools can lead to significant regulatory fines, data breaches, and loss of intellectual property. “Depending on the scale of the agency and the sensitivity of the data exposed, the costs could range from millions to potentially billions in damages due to compliance violations, remediation efforts, and reputational harm,” he said.
    “Federal agencies handling vast amounts of sensitive or classified information, financial institutions, and health care organizations are particularly vulnerable,” he said. “These sectors collect and analyze vast amounts of high-value data, making AI tools attractive. But without proper vetting, these tools could be easily exploited.”
    Shadow AI Is Everywhere and Easy To Use
    Nicole Carignan, SVP for security and AI strategy at Darktrace, a global cybersecurity AI company, predicts an explosion of tools that utilize AI and generative AI within enterprises and on devices used by employees.
    “In addition to managing AI tools that are built in-house, security teams will see a surge in the volume of existing tools that have new AI features and capabilities embedded, as well as a rise in shadow AI,” she told TechNewsWorld. “If the surge remains unchecked, this raises serious questions and concerns about data loss prevention, as well as compliance concerns as new regulations start to take effect.”
    “That will drive an increasing need for AI asset discovery — the ability for companies to identify and track the use of AI systems throughout the enterprise,” she said. “It is imperative that CIOs and CISOs dig deep into new AI security solutions, asking comprehensive questions about data access and visibility.”
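    The AI asset discovery Carignan describes can start with something as basic as checking egress logs for traffic to known GenAI endpoints. The sketch below is illustrative only: the domain list, log format, and field layout are assumptions for the example, not a description of any vendor's product.

    ```python
    # Minimal shadow-AI discovery sketch: count requests to known GenAI
    # domains in a web-proxy log. The domain list and the assumed log
    # format ("<timestamp> <user> <url>") are illustrative; a real
    # inventory would use a maintained domain feed and the proxy's
    # actual schema.
    import re
    from collections import Counter

    GENAI_DOMAINS = {
        "chat.openai.com", "api.openai.com",
        "claude.ai", "api.anthropic.com",
        "gemini.google.com",
    }

    LOG_LINE = re.compile(r"^(\S+)\s+(\S+)\s+https?://([^/\s]+)")

    def find_shadow_ai(log_lines):
        """Return a per-user count of requests to known GenAI domains."""
        hits = Counter()
        for line in log_lines:
            m = LOG_LINE.match(line)
            if m and m.group(3).lower() in GENAI_DOMAINS:
                hits[m.group(2)] += 1
        return hits

    sample = [
        "2025-06-04T09:00 alice https://chat.openai.com/backend/chat",
        "2025-06-04T09:01 bob https://intranet.example.com/wiki",
        "2025-06-04T09:02 alice https://claude.ai/new",
    ]
    print(find_shadow_ai(sample))  # Counter({'alice': 2})
    ```

    Even a crude inventory like this gives a CIO the starting visibility the experts above call for; the harder problem is AI features embedded inside already-approved SaaS tools, which never show up as a distinct domain.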
    Shadow AI has become so rampant because it is everywhere and easy to access through free tools, maintained Komprise’s Subramanian. “All you need is a web browser,” she said. “Enterprise users can inadvertently share company code snippets or corporate data when using these Gen AI tools, which could create data leakage.”
    “These tools are growing and changing exponentially,” she continued. “It’s really hard to keep up. As the IT leader, how do you track this and determine the risk? Managers might be looking the other way because their teams are getting more done. You may need fewer contractors and full-time employees. But I think the risk of the tools is not well understood.”
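    The inadvertent leakage Subramanian describes, pasting secrets or customer data into a chat prompt, is the kind of thing a lightweight pre-submission check can catch. A rough sketch, with purely illustrative patterns (production DLP tooling uses far richer detection):

    ```python
    # Rough pre-prompt DLP check: flag text that looks like it contains
    # secrets before it is sent to a GenAI tool. The patterns are
    # illustrative examples only, not a complete rule set.
    import re

    SECRET_PATTERNS = {
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "private_key_header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    }

    def screen_prompt(text):
        """Return the names of the secret patterns the prompt matches."""
        return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

    prompt = "Debug this: client emailed jane.doe@example.com about key AKIAABCDEFGHIJKLMNOP"
    print(screen_prompt(prompt))  # ['aws_access_key', 'email_address']
    ```

    A check like this can warn the user or block the submission; as several of the experts note, pairing it with an explanation of *why* the prompt was flagged does more for compliance than silently blocking it.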
    “The low, or in some cases non-existent, learning curve associated with using Gen AI services has led to rapid adoption, regardless of prior experience with these services,” added Satyam Sinha, CEO and co-founder of Acuvity, a provider of runtime Gen AI security and governance solutions, in Sunnyvale, Calif.
    “Whereas shadow IT focused on addressing a specific challenge for particular employees or departments, shadow AI addresses multiple challenges for multiple employees and departments. Hence, the greater appeal,” he said. “The abundance and rapid development of Gen AI services also means employees can find the right solution. Of course, all these traits have direct security implications.”
    Banning AI Tools Backfires
    To support innovation while minimizing the threat of shadow AI, enterprises must take a three-pronged approach, asserted Kris Bondi, CEO and co-founder of Mimoto, a threat detection and response company in San Francisco. They must educate employees on the dangers of unsupported, unmonitored AI tools, create company protocols for what is not acceptable use of unauthorized AI tools, and, most importantly, provide AI tools that are sanctioned.
    “Explaining why one tool is sanctioned and another isn’t greatly increases compliance,” she told TechNewsWorld. “It does not work for a company to have a zero-use mandate. In fact, this results in an increase in stealth use of shadow AI.”
    In the very near future, more and more applications will be leveraging AI in different forms, so the reality of shadow AI will be present more than ever, added AppOmni’s Ruzzi. “The best strategy here is employee training and AI usage monitoring,” she said.
    “It will become crucial to have in place a powerful SaaS security tool that can go beyond detecting direct AI usage of chatbots to detect AI usage connected to other applications,” she continued, “allowing for early discovery, proper risk assessment, and containment to minimize possible negative consequences.”
    “Shadow AI is just the beginning,” KnowBe4’s McQuiggan added. “As more teams use AI, the risks grow.”
    He recommended that companies start small, identify what’s being used, and build from there. They should also get legal, HR, and compliance involved.
    “Make AI governance part of your broader security program,” he said. “The sooner you start, the better you can manage what comes next.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

  • LinkedIn CEO to now also oversee Microsoft Office and M365 Copilot

    Microsoft has tapped LinkedIn CEO Ryan Roslansky for a dual role leading Microsoft Office and M365 Copilot as the tech company looks to dominate in the enterprise productivity space.

    Roslansky will continue to serve as LinkedIn CEO, reporting to Microsoft CEO Satya Nadella, as he takes on his new role as EVP of Office under EVP Rajesh Jha. He announced the promotion on LinkedIn.

    The popular social and recruiting platform for enterprise professionals has steadily increased its revenues and launched new AI-powered products under Roslansky’s leadership, and Microsoft’s move reflects its intent to go all-in on AI.

    “LinkedIn has been especially successful at building and extending products over time,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “There is no doubt that Microsoft wants to bring that expertise to Microsoft 365, especially in the adoption of Copilot.”

    Successful product leader turned CEO

    Roslansky will now oversee the Office/M365 productivity software, which includes Word, Excel, PowerPoint, Outlook, and Teams. Microsoft’s AI assistant, M365 Copilot, which launched in 2023, will also be under his purview.

    Roslansky has spent 16 years at LinkedIn, five of those as its CEO. Previously, he was SVP of products and content at Glam Media, and general manager and product manager at Yahoo.

    Microsoft bought LinkedIn for $27 billion in 2016, and in his LinkedIn post, Roslansky called it “one of Microsoft’s most successful acquisitions.” The platform for connecting business professionals achieved $16.37 billion in revenues in 2024, up from $14.9 billion in 2022. LinkedIn has launched numerous AI products in recent years, including AI-assisted messaging, search, and projects, automated follow-ups, gauging candidate likelihood of interest, and resumé search.

    “Roslansky is a successful product leader turned CEO of a subsidiary company,” said Jeremy Roberts, senior director of research and content at Info-Tech Research Group. “He has a good track record of growing LinkedIn’s revenue year-over-year and largely keeping the platform out of trouble.”

    Roberts noted that his product bona fides will be “especially useful” as Microsoft figures out how to fit Copilot into its broader product offerings and consolidate its AI strategy between divisions.

    Amalgam Insights’ Park pointed out that every enterprise application vendor “desperately” wants to own the business AI usage market, and Microsoft is looking to increase the amount of screen time users have with Office 365.

    “Roslansky’s success in building LinkedIn as a platform demonstrates the potential to have similar success with 365,” he said.

    Redefining Microsoft and LinkedIn

    In his LinkedIn post, Roslansky called Microsoft Office “one of the most iconic product suites in history” that has “shaped how the world works, literally.” He noted that he is coming into the role in “a new, exciting era where productivity, connection, and AI are converging at scale.”

    “Both Office and LinkedIn are used daily by professionals globally, and I’m looking forward to redefining ourselves in this new world,” he wrote.

    Roberts noted that pushing deeper integration between its product lines and de-duplicating development efforts is probably also part of Microsoft’s motive for the hire. However, it doesn’t necessarily mean that there will be all sorts of Microsoft Office features natively built into LinkedIn, such as the ability to ask Copilot to build a slideshow in PowerPoint from within LinkedIn, but he believes we could see some rationalization of back-end platforms and services.

    “LinkedIn has operated quite independently, so this could be part of a broader effort to fold it in, realize some efficiencies, and further Microsoft’s AI ambitions,” said Roberts. On the other hand, it could also be a circumstance where Microsoft had a product in need of a leader, and a successful product leader looking to expand his portfolio.

    Roberts also emphasized that being in charge of Microsoft Office and M365 Copilot is not the same as being in charge of Microsoft 365, which includes enterprise mobility and security, Windows 11, and a number of other applications.

    “So it’s both big news and a relatively minor shakeup, depending on what Nadella intends with this move,” said Roberts.
    #linkedin #ceo #now #also #oversee
    LinkedIn CEO to now also oversee Microsoft Office and M365 Copilot
    Microsoft has tapped LinkedIn CEO Ryan Roslansky for a dual role leading Microsoft Office and M365 Copilot as the tech company looks to dominate the enterprise productivity space. Roslansky will continue to serve as LinkedIn CEO, reporting to Microsoft CEO Satya Nadella, as he takes on his new role as EVP of Office under EVP Rajesh Jha. He announced the promotion on LinkedIn.

    The popular social and recruiting platform for enterprise professionals has steadily increased its revenues and launched new AI-powered products under Roslansky’s leadership, and Microsoft’s move reflects its intent to go all-in on AI.

    “LinkedIn has been especially successful at building and extending products over time,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “There is no doubt that Microsoft wants to bring that expertise to Microsoft 365, especially in the adoption of Copilot.”

    Successful product leader turned CEO

    Roslansky will now oversee the Microsoft 365 Office productivity suite, which includes Word, Excel, PowerPoint, Outlook, and Teams. Microsoft’s AI assistant, M365 Copilot, which launched in 2023, will also be under his purview.

    Roslansky has spent 16 years at LinkedIn, five of those as its CEO. Previously, he was SVP of products and content at Glam Media, and general manager and product manager at Yahoo. Microsoft bought LinkedIn for $27 billion in 2016, and in his LinkedIn post, Roslansky called it “one of Microsoft’s most successful acquisitions.” The platform for connecting business professionals achieved $16.37 billion in revenues in 2024, up from $14.9 billion in 2022. LinkedIn has launched numerous AI products in recent years, including AI-assisted messaging, search, and projects, automated follow-ups, gauging candidate likelihood of interest, and resumé search.

    “Roslansky is a successful product leader turned CEO of a subsidiary company,” said Jeremy Roberts, senior director of research and content at Info-Tech Research Group. “He has a good track record of growing LinkedIn’s revenue year-over-year and largely keeping the platform out of trouble.”

    Roberts noted that his product bona fides will be “especially useful” as Microsoft figures out how to fit Copilot into its broader product offerings and consolidate its AI strategy between divisions. Amalgam Insights’ Park pointed out that every enterprise application vendor “desperately” wants to own the business AI usage market, and Microsoft is looking to increase the amount of screen time users have with Office 365. “Roslansky’s success in building LinkedIn as a platform demonstrates the potential to have similar success with 365,” he said.

    Redefining Microsoft and LinkedIn

    In his LinkedIn post, Roslansky called Microsoft Office “one of the most iconic product suites in history” that has “shaped how the world works, literally.” He noted that he is coming into the role in “a new, exciting era where productivity, connection, and AI are converging at scale.” “Both Office and LinkedIn are used daily by professionals globally, and I’m looking forward to redefining ourselves in this new world,” he wrote.

    Roberts noted that pushing deeper integration between its product lines and de-duplicating development efforts is probably also part of Microsoft’s motive for the hire. However, that doesn’t necessarily mean Microsoft Office features will be built natively into LinkedIn, such as the ability to ask Copilot to build a PowerPoint slideshow from within LinkedIn; rather, he believes we could see some rationalization of back-end platforms and services. “LinkedIn has operated quite independently, so this could be part of a broader effort to fold it in, realize some efficiencies, and further Microsoft’s AI ambitions,” said Roberts. On the other hand, it could also be a circumstance where Microsoft had a product in need of a leader, and a successful product leader looking to expand his portfolio.

    Roberts also emphasized that being in charge of Microsoft Office and M365 Copilot is not the same as being in charge of Microsoft 365, which includes enterprise mobility and security, Windows 11, and a number of other applications. “So it’s both big news and a relatively minor shakeup, depending on what Nadella intends with this move,” said Roberts.
    WWW.COMPUTERWORLD.COM
    LinkedIn CEO to now also oversee Microsoft Office and M365 Copilot
  • Amazon Programmers Say What Happened After Turn to AI Was Dark

    The shoehorning of AI into everything has programmers at Amazon feeling less like the tedious parts of their jobs are being smoothly automated, and more like their work is beginning to resemble the drudgery of toiling away in one of the e-commerce giant's vast warehouses.

    That's the bleak picture painted in new reporting from the New York Times, in which Amazon leadership — as is the case at so many other companies — is convinced that AI will marvelously jack up productivity. Tasked with conjuring the tech's mystic properties, of course, are our beleaguered keyboard-clackers.

    Today, there's no shortage of AI coding assistants to choose from. Google and Meta are making heavy use of them, as is Microsoft. Satya Nadella, CEO of the Redmond giant, estimates that as much as 30 percent of the company's code is now written with AI. If Amazon's to keep up with the competition, it needs to follow suit. CEO Andy Jassy echoed this in a recent letter to shareholders, cited by the NYT, emphasizing the need to give customers what they want as "quickly as possible," before upholding programming as a field in which AI would "change the norms."

    And that it has — though this is less due to the merits of AI and more the result of the over-eager opportunism of the company's management. Three Amazon engineers told the NYT that their bosses have increasingly pushed them to use AI in their work over the past year. And with that came increased output goals and even tighter deadlines. One engineer said that his team was reduced to roughly half the size it was last year — but it was still expected to produce the same amount of code by using AI.

    In short, new automating technology is being used to justify placing increased demands on workers. "Things look like a speed-up for knowledge workers," Lawrence Katz, a labor economist at Harvard University, told the NYT, citing ongoing research. "There is a sense that the employer can pile on more stuff."

    Adopting AI was ostensibly optional for the Amazon programmers, but the choice was all but made for them. One engineer told the newspaper that they're now expected to finish building new website features in just a few days, whereas before they had several weeks. This ludicrous ramp-up is only made possible by using AI to automate some of the coding, and it comes at the expense of quality: there's less time for consulting with colleagues to get feedback and bounce ideas around.

    Above all, AI is sapping the joy out of the profession. AI-generated code requires extensive double-checking — a prominent critique that can't be ignored here, and one of the main reasons skeptics question whether these programming assistants actually produce gains in efficiency. And when you're reduced to proofreading a machine, there's little room for creativity, and an even more diminished sense of control.

    "It's more fun to write code than to read code," Simon Willison, a programmer and blogger who's both an enthusiast of AI and a frequent critic of the tech, told the NYT. "If you're told you have to do a code review, it's never a fun part of the job. When you're working with these tools, it's most of the job."

    Amazon, for its part, maintains that it conducts regular reviews to ensure that its teams are adequately staffed. "We'll continue to adapt how we incorporate Gen AI into our processes," an Amazon spokesman told the NYT.
    FUTURISM.COM
  • Former PlayStation Exec Warns Developers Against Relying Too Much on Subscription Services

    Former PlayStation executive Shuhei Yoshida, in an interview with Game Developer at Gamescom LATAM, has warned developers against relying too heavily on subscription services. According to Yoshida, game subscription services, like Xbox Game Pass, can be “really dangerous”, since these services could start dictating what kinds of games developers would be able to make.
    Yoshida expanded on this idea by mentioning that big companies – which typically tend to be averse to funding games based on big, risky ideas – would try to steer developers under them toward safer genres or gameplay styles to appease a player base that might end up existing primarily on subscription services.
    “If the only way for people to play games is through subscriptions that’s really dangerous, because what [type] of games can be created will be dictated by the owner of the subscription services,” said Yoshida.
    “That’s really, really risky because there always must always be fresh new ideas tried by small developers that create the next wave of development. But if the big companies dictate what games can be created, I don’t think that will advance the industry.”
    Yoshida also believes that Sony’s approach to a subscription service, through some of the higher tiers available for PlayStation Plus, might be “healthier” for developers and the overall industry. While he does acknowledge that his time working for Sony might have biased him a bit in the company’s favour, Yoshida also says that, through PlayStation Plus, Sony avoids over-promising, while also encouraging players to buy games rather than to wait for the games to come to the service.
    “I believe the way Sony approached [subscriptions] is healthier. You know, not to overpromise and to allow people to spend money to buy the new games,” Yoshida said. “After a couple of years there won’t be many people willing to buy those games at that initial price, so they’ll be added to the subscription service and there’ll be more people to try [those products] in time for the next game in the franchise to come out.”
    When it comes to Sony’s competitors in the console market, Yoshida praises Microsoft for its efforts in bringing backwards compatibility to Xbox Series X/S. “They must have put a lot of engineering effort in to achieve what they have done,” he said. As for Nintendo, Yoshida praises the company’s strategy, as well as the technology behind the Switch and its Joy-Con controllers. “[That’s] so smart,” he said. “It’s in their DNA to cater to the needs of family and friends.”
    While Yoshida might have a point about Game Pass, Microsoft has considered it to be quite successful. During an earnings call back in January, Microsoft CEO Satya Nadella spoke about the subscription service’s growth, revealing that its subscriber base had grown by more than 30 percent.
    “All-up, Game Pass set a new quarterly record for revenue and grew its PC subscriber base by over 30%, as we focus on driving fully paid subscribers across endpoints,” said Nadella in the earnings call, who went on to praise the critical response for Indiana Jones and the Great Circle.
    GAMINGBOLT.COM
  • What AI’s impact on individuals means for the health workforce and industry

    Transcript    
    PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.”      
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.
    The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak.
    You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues.
    So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.  
    To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar.
    Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence.
    Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics.
    Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.
    Here is my interview with Ethan Mollick:
    LEE: Ethan, welcome.
    ETHAN MOLLICK: So happy to be here, thank you.
    LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. So to get started, how and why did it happen that you’ve become one of the leading experts on AI?
    MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was doing my PhD at MIT, I worked with Marvin Minsky and the MIT Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it.
    And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst into the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially under education and AI and performance, I became sort of a go-to person in the field.
    And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months’ head start for GPT-4, right. Like that’s all we have at this point, is a few months’ head start. So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.
    LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?
    MOLLICK: So, I mean, I think the bigger picture is less about me than about two things it tells us about the state of AI right now.
    One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things.
    And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy and diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever.
    So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay. So we’re kind of all in the same boat here, which is a very unusual space for a new technology.
    LEE: And I, you know, explained that you’re at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty?
    MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and on education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is general-purpose technology; it’s going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect.
    So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated.
    LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.
    MOLLICK: Yeah. Those are great questions. So first of all, when I was at the media lab, that was pre-the current boom in sort of, you know, even in the old-school machine learning kind of space. So there was a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system.
    There was a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing?
    The fact that it was just like brute force over the corpus of all human knowledge turns out to be a little bit of like a, you know, it’s a miracle and a little bit of a disappointment in some ways compared to how elaborate some of this was. So, you know, I think that, that was sort of my first encounters in sort of the intellectual way.
    The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021 that this is a big deal. But I think everybody was taken surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.
    LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.
    MOLLICK: Yes.
    LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?
    MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course, computers can give you answers to questions and write fun things. So there’s difference of moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right.
    I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?”
    So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4 level intelligence from, you know, when it was released has dropped 99.97% at this point, right.
    LEE: Yes. Mm-hmm.
    MOLLICK: I mean, I could run a GPT-4 class system basically on my phone. Microsoft’s releasing things that can almost run on like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either.
    LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time. You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?
    MOLLICK: I mean, you know, it’s not even dread as much as like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered.
    You know, as academics, we’re a little used to dead ends, right, and, like, you know, getting scooped. But the idea that entire fields are hitting dead ends that way. Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete.
    What do you do to change that? You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one.
    Now we’ve built a system that can build teaching simulations on demand by you talking to it with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet.
    LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this. 
    MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where like, ughhhh, but then there’s joy and basically like also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills.
    Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of our stress of our job comes from the fact that we suck at some of it. And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI. And I think it’s an ongoing question about how long that lasts. But for right now, like you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely.
    But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety.
    LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or request for prior authorization for some reimbursement to an insurance company.
    And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?
    MOLLICK: So I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there’s lots of performance gains to be gained, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about systems that fit along with this, right.
    So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there’s incredible amounts of individual innovation in productivity and performance improvements in this field, like very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains.
    And one of my big concerns is seeing that happen. We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 to 40% performance improvements, like not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result.
    LEE: You know, where are those productivity gains going, then, when you get to the organizational level?
    MOLLICK: Well, they’re dying for a few reasons. One is, there’s a tendency for individual contributors to underestimate the power of management, right.
    Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the advantage of US firms over firms in other countries has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal.
    At the individual level, when things get stuck there, right, you can’t start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen.
    So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons.
    And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves.
    So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there’s lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change.
    LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?
    MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. They involve change. So I think part of this is embracing the idea that R&D has to happen in organizations again.
    What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field.
    So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab.
    So the crowd is the idea of how to empower clinicians and administrators and supporter networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to get AI to work, not just in direct patient care, right. But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill?
    And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.
    LEE: So let’s shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones.
    And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ?
    MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it’s cheap, right? You could overrule it or whatever you want, but like not asking seems foolish.
    I think the two places where there’s a burning almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space.
    But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say when should you use AI and when not. Are there blind spots? What are those things?
    And I worry that, like, to me, that would be the crash project I’d be invoking because I’m doing the same thing in education, which is this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than, you know, not getting access to the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to.
    So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that.
    LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?
    MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t like an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting in, you know, any one case is probably not that useful.
    A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but like magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing.
    So I worry when people say teach AI skills. No one’s been able to articulate to me as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right.
    I mean, there’s value in learning a little bit how the models work. There’s a value in working with these systems. A lot of it’s just hands on keyboard kind of work. But, like, we don’t have an easy slam dunk “this is what you learn in the world of AI” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboards, getting them to … there’s like four things I could teach you about AI, and two of them are already starting to disappear.
    But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s like, that’s it as far as the research telling you what to do, and the rest is building intuition.
    LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.”
    MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize.
    LEE: It’s good to chuckle about that, but actually, I can’t think of a better book, like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading?
    MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems.
    So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is.
    But using it. Voice modes help a lot. In terms of readings, I mean, I think that there is a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview …
    LEE: Yeah, that’s a great one.
    MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think Karpathy has some really nice videos of use that I would recommend.
    Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But like all we can offer are hints in some ways. Like there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works.
    LEE: Yeah.
    MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right.
    LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here.
    Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME accrediting body, what’s the one thing you would want them to really internalize?
    MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, which, “Let’s see how this works.” Because it’s, like, the things that make medical technologies hard to do, which is like unclear results, limited, you know, expensive use cases where it rolls out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine.
    I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods, like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast.
    So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right.
    We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other form of AI, and your AI working group that is thinking about how to solve this problem is not the right people here.
    LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer.
    MOLLICK: Yes. Yes.
    LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall.
    But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence really just even in the title of the book tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea?
    MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person even though it isn’t, right.
    There’s a lot of warnings and caveats to it, but if you start from person, smart person you’re talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right.
    LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens?
    MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who like may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people.
    So, like, it’s hard to imagine medicine being so upended in five to 10 years that, even if AI was better than doctors at every single thing doctors do, we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine.
    But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference. We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point.
    Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not.
    Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get?
    LEE: Yeah.
    MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes it easier to assume the future, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help for everything.
    Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way, or get much worse if we handle it badly. Diagnostic accuracy will increase, right.
    And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen and how does that affect us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it.
    LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining.
    MOLLICK: Thank you.  
    I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work.
    One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does.
    In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI.
    The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI.
    Here’s now my interview with Azeem Azhar:
    LEE: Azeem, welcome.
    AZEEM AZHAR: Peter, thank you so much for having me. 
    LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before.
    And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day?
    AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip …
    LEE: Oh wow.
    AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started.
    And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large.
    LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through?
    AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed.
    Now, I’d been aware of GPT-3 and GPT-2, which I played around with and with BERT, the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language that really made me think we’ve crossed into a new domain. We’ve gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th.
    And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely pass some kind of threshold.
    LEE: And who’s the we that you were experimenting with?
    AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, or they walk into our virtual team room, and we try to solve problems.
    LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance, manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and which ways, you know, efficiencies are found.  
    And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and looking at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine?
    AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, it’s hard to imagine a sector that is more broad than that.
    So I think we can start to break it down, and, you know, where we’re seeing things with generative AI will be that the, sort of, softest entry point, which is the medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros I noticed, right? They’re on the tablet computers, and they’re scribing away.
    And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload.
    And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help.
    So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced.
    So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized.
    LEE: Yeah.
    AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.
    LEE: Yup.
    AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to.
    And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and patient pathway. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on.
    It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector.
    And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout.
    So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.
    LEE: I love how you break that down. And I want to press on a couple of things.
    You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?
    AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example.
    In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different.
    I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say well no one ever does this, and I said, well you know the thing is that I kind of just want to get this thing to go away.
    LEE: Yeah.
    AZHAR: And I think that that’s why medicine is and healthcare is so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week.
    LEE: Right. Yeah.
    AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.
    LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution.
    Because like healthcare, as a consumer, I don’t have choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons.
    And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work?
    AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice.
    I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about you know asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors.
    I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner.
    LEE: Yeah.
    AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.
    LEE: Right.
    AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few little differences: the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.
    LEE: Yeah, one last question while we’re still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis.
    And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity since obviously costs, in most of the developed world, are a huge, huge problem?
    AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around the hospital faster and faster than ever before.
    We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right?
    LEE: Yeah, yeah.
    AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed the productivity by orders of magnitude of people who made cars starting with Henry Ford because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system.
    So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later …
    LEE: Right.
    AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for.
    And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system …
    LEE: Yup.
    AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that.
    So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies that might slow things down. But I think a reimagining is possible.
    And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya was another. And in the latter, they were a cardiac care unit where you couldn’t get enough heart surgeons.
    LEE: Yeah, yep.
    AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own.
    LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, that is certified in some way, licensed to do. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop?
    AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions, like “What time should I leave for the airport?” to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold.
    If I come back to my example of prescribing Ventolin. It’s really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 years or 12 years of medical training. And why that couldn’t be prescribed by an algorithm or an AI system.
    LEE: Right. Yep. Yep.
    AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time.
    LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you.
    AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician.
    In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’ve just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing. Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart.
    I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI” is, it’s obviously been done in PowerPoint naturally, and it’s a bunch of arrows. Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that.
    LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time.
    AZHAR: Yeah, yeah. Thank god for Clippy. Yes.
    LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary on that.
    And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway.
    AZHAR: Right.
    LEE: And so now two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like?
    AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs, and there are the classic set of processes you go through.
    You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning so that every element of the system can learn from this experience.
    So finding that space where there can be bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly.
    So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what is it to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots.
    LEE: Yes.
    AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval.
    I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s saying … hearing what I’m saying, but he’s actually pulling up the real data going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be.
    LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. And because one of the things that consumers want to know always is, you know, what’s the truth?
    AZHAR: Right.
    LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow.
    AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week.
    And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have like a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician.
    LEE: Yeah.
    AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right.
    LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah.
    AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor traditionally given to diabetics but now lots of people will wear them—and sports people will wear them—you probably gathered a lot of extreme tail distribution data by reading the Reddit r/biohackers …
    LEE: Yes.
    AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next.
    LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this.
    And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions?
    AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in.
    LEE: OK.
    AZHAR: As patients, we will have many, many more touch points and interaction with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches.
    And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time.
    LEE: Yes.
    AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety.
    And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us personalize problems before they arise, and again, I use my experience for things that I’ve tracked really, really well. And I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of preemptive measure, so I think that that will become progressively more common and that sense that we will know our baselines.
    I mean, when you think about being an athlete, which is something I think about, but I could never ever do, but what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive, but it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health.
    LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said.
    Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think the systemic issues that you tend to see with such clarity are going to be the most, kind of, profound drivers of change in the future. So thank you so much.
    AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you.  
    I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies.
    In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.  
    Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear.
    Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even in radical ways. I think a big insight I got from these conversations is how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level.
    Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, travel agent, and more were also shown during the conference.
    But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing.
    A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in.
    Until next time.
    What AI’s impact on individuals means for the health workforce and industry
    Transcript
    PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.
    The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak. You know, it’s no secret that in the US and elsewhere, shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues. So in this episode, we’ll delve into how the individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare.
Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.
To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar. Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence.
Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics.
Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.
Here is my interview with Ethan Mollick:
LEE: Ethan, welcome.
ETHAN MOLLICK: So happy to be here, thank you.
LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. So to get started, how and why did it happen that you’ve become one of the leading experts on AI?
MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was doing my PhD at MIT, I worked with Marvin Minsky and the MIT Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it. And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially under education and AI and performance, I became sort of a go-to person in the field. And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months’ head start for GPT-4, right. Like that’s all we have at this point, is a few months’ head start. So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.
LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?
MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that it tells us about the state of AI right now. One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things.
They weren’t really built to do anything in particular. It turns out they’re just good at many things. And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy in diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever. So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay. So we’re kind of all in the same boat here, which is a very unusual space for a new technology.
LEE: And I, you know, explained that you’re at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty?
MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is a general-purpose technology; it’s going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right.
It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect. So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated.
LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.
MOLLICK: Yeah. Those are great questions. So first of all, when I was at the media lab, that was pre-the-current-boom in, sort of, you know, even the old-school machine learning kind of space. So there were a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system. There was a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much the Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing?
The fact that it was just, like, brute force over the corpus of all human knowledge turns out to be a little bit of, like, you know, a miracle and a little bit of a disappointment in some ways compared to how elaborate some of this was. So, you know, I think that, that was sort of my first encounters in sort of the intellectual way. The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021 that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.
LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.
MOLLICK: Yes.
LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?
MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course, computers can give you answers to questions and write fun things.
So there’s a difference in moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right. I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of, like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?” So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4-level intelligence from, you know, when it was released has dropped 99.97% at this point, right.
LEE: Yes. Mm-hmm.
MOLLICK: I mean, I could run a GPT-4-class system basically on my phone. Microsoft’s releasing things that can almost run on, like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either.
LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time.
You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?
MOLLICK: I mean, you know, it’s not even dread as much as, like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered. You know, as academics, we’re a little used to dead ends, right, and like, you know, some getting the lap. But the idea that entire fields are hitting that way. Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete. What do you do to change that? You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one.
Now we’ve built a system that can build teaching simulations on demand by you talking to it with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of, like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet.
LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this.
MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where like, ughhhh, but then there’s joy and basically like also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills. Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of the stress of our job comes from the fact that we suck at some of it. And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI.
And I think it’s an ongoing question about how long that lasts. But for right now, like, you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely. But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety.
LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or request for prior authorization for some reimbursement to an insurance company. And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?
MOLLICK: So I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there’s lots of performance gains to be gained, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about systems that fit along with this, right.
So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there’s incredible amounts of individual innovation in productivity and performance improvements in this field, like very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains. And one of my big concerns is seeing that happen. We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 and 40% performance improvements, like, not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result.
LEE: You know, where are those productivity gains going, then, when you get to the organizational level?
MOLLICK: Well, they’re dying for a few reasons. One is, there’s a tendency for individual contributors to underestimate the power of management, right. Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the advantage of US firms over other companies has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal. At the individual level, when things get stuck there, right, you can’t start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement?
We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen. So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons. And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves. So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there’s lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change.
LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?
MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. They involve change. So I think part of this is embracing the idea that R&D has to happen in organizations again. What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff.
Partially, it’s because it’s outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field. So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab. So the crowd is the idea of how to empower clinicians and administrators and supporter networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to get AI to work, not just in direct patient care, right. But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill? And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.
LEE: So let’s shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones. And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ?
MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right.
In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it’s cheap, right? You could overrule it or whatever you want, but, like, not asking seems foolish. I think the two places where there’s a burning, almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space. But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say when should you use AI and when not. Are there blind spots? What are those things? And I worry that, like, to me, that would be the crash project I’d be invoking because I’m doing the same thing in education, which is this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than not getting, you know, the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to. So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that.
LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?
MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t, like, an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting, you know, any one case is probably not that useful. A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but, like, magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing. So I worry when people say teach AI skills. No one’s been able to articulate to me, as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right. I mean, there’s value in learning a little bit about how the models work. There’s value in working with these systems. A lot of it’s just hands-on-keyboard kind of work. But, like, we don’t have an easy slam-dunk “this is what you learn in the world of AI” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better at prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboards, getting them to … there’s, like, four things I could teach you about AI, and two of them are already starting to disappear. But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have.
The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s like, that’s it as far as the research telling you what to do, and the rest is building intuition. LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.” MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize. LEE: It’s good to chuckle about that, but actually, I can’t think of a better book, like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading? MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems. So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is. But using it. Voice modes help a lot. In terms of readings, I mean, I think that there is a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview … LEE: Yeah, that’s a great one. MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think Karpathy has some really nice videos of use that I would recommend. 
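Mollick’s four prompting principles (be direct, provide context, give step-by-step directions, and show good and bad examples of the output) can be illustrated with a minimal Python sketch. This is not from the transcript: the function name and the sample wording are hypothetical, chosen only to show how the four ingredients might be assembled into one prompt.

```python
def build_prompt(task, context, steps, good_example, bad_example):
    """Assemble a prompt string from the four ingredients Mollick describes."""
    parts = [
        f"Task: {task}",          # 1. Be direct: say exactly what you want.
        f"Context: {context}",    # 2. Provide as much context as possible.
        "Steps:",                 # 3. Step-by-step directions.
    ]
    parts += [f"  {i}. {s}" for i, s in enumerate(steps, start=1)]
    parts += [
        f"Good example of output: {good_example}",  # 4. Good and bad examples
        f"Bad example of output: {bad_example}",    #    of the desired output.
    ]
    return "\n".join(parts)

# Hypothetical usage, loosely echoing the "acting as a doctor" context example:
prompt = build_prompt(
    task="Summarize the patient note in two sentences for a referral letter.",
    context="You are acting as a primary-care physician. Note: <note text>.",
    steps=[
        "Identify the chief complaint.",
        "List active medications.",
        "Write the two-sentence summary.",
    ],
    good_example="Concise; names the complaint and the relevant medications.",
    bad_example="A vague paragraph that omits the medications.",
)
print(prompt)
```

As Mollick notes, the step-by-step part is already becoming less important as models improve, so a sketch like this is best read as a checklist, not a recipe.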
Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But like all we can offer are hints in some ways. Like there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works. LEE: Yeah. MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right. LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here. Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME accrediting body, what’s the one thing you would want them to really internalize? MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, which, “Let’s see how this works.” Because it’s, like, the things that make medical technologies hard to do, which is like unclear results, limited, you know, expensive use cases where it rolls out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine. 
I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods, like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast. So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right. We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other form of AI, and your AI working group that is thinking about how to solve this problem is not the right people here. LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer. MOLLICK: Yes. Yes. LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall. But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. 
It’s this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence really just even in the title of the book tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea? MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person even though it isn’t, right. There’s a lot of warnings and caveats to it, but if you start from person, smart person you’re talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right. LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. 
You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens? MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who like may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people. So, like, it’s hard to imagine that in five to 10 years medicine being so upended that even if AI was better than doctors at every single thing doctors do, that we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine. But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference. We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point. Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not. Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get? LEE: Yeah. MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes it easier to assume the future, but let’s just assume that that’s the case. 
I think medicine starts to change with the idea that people feel obligated to use this to help for everything. Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this right way or much worse if we handle it badly. Diagnostic accuracy will increase, right. And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen and how does that affect us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it. LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining. MOLLICK: Thank you.   I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work. One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. 
In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does. In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI. The other big takeaway for me was that Ethan pointed out while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI. Here’s now my interview with Azeem Azhar: LEE: Azeem, welcome. AZEEM AZHAR: Peter, thank you so much for having me.  LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before. And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day? AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip … LEE: Oh wow. AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. 
And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started. And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large. LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through? AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed. Now, I’d been aware of GPT-3 and GPT-2, which I played around with and with BERT, the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language that really made me think we’ve crossed into a new domain. We’ve gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November, the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely pass some kind of threshold. LEE: And who’s the we that you were experimenting with? 
AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, or they walk into our virtual team room, and we try to solve problems. LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance, manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and which ways, you know, efficiencies are found. And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and looking at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine? AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, it’s hard to imagine a sector that is more broad than that. So I think we can start to break it down, and, you know, where we’re seeing things with generative AI will be that the, sort of, softest entry point, which is the medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros I noticed, right? They’re on the tablet computers, and they’re scribing away. 
And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload. And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help. So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced. So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized. LEE: Yeah. AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura. LEE: Yup. AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to. And there’s still another category. 
And that other category is one of the completely novel ways in which we can enable patient care and patient pathway. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on. It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector. And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout. So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems. LEE: I love how you break that down. And I want to press on a couple of things. You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated? AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. 
And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example. In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different. I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say well no one ever does this, and I said, well you know the thing is that I kind of just want to get this thing to go away. LEE: Yeah. AZHAR: And I think that that’s why medicine is and healthcare is so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week. LEE: Right. Yeah. AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer. LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution. 
Because like healthcare, as a consumer, I don’t have choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons. And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work? AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice. I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about you know asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors. I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner. LEE: Yeah. AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. 
And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly. LEE: Right. AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few little things that differ: the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful. LEE: Yeah, one last question while we’re still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis. And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity since obviously costs, in most of the developed world, are a huge, huge problem? AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around the hospital faster and faster than ever before. We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right? LEE: Yeah, yeah. 
AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed the productivity by orders of magnitude of people who made cars starting with Henry Ford because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system. So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later … LEE: Right. AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for. And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system … LEE: Yup. AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that. So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies that might slow things down. But I think a reimagining is possible. 
And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya was another. And in the latter, there was a cardiac care unit where you couldn’t get enough heart surgeons. LEE: Yeah, yep. AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own. LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, that is certified in some way, licensed to do. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop? AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions like, What time should I leave for the airport to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold. If I come back to my example of prescribing Ventolin. 
It’s really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 years or 12 years of medical training. And why that couldn’t be prescribed by an algorithm or an AI system. LEE: Right. Yep. Yep. AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time. LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you. AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician. In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’ve just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing. 
Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart: I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI,” is, it’s obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that. LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time. AZHAR: Yeah, yeah. Thank god for Clippy. Yes. LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary on that. And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway. AZHAR: Right. LEE: And so now two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like? 
AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs, and there are the classic set of processes you go through. You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning so that every element of the system can learn from this experience. So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly. So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what is it to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots. LEE: Yes. AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval. I come back to my sleep tracking ring. 
So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s not just hearing what I’m saying; he’s actually pulling up the real data going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be. LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. And because one of the things that consumers always want to know is, you know, what’s the truth? AZHAR: Right. LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow. AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week. And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have like a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician. LEE: Yeah. AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right. LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah. AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. 
So if you’re somebody who makes a continuous glucose monitor traditionally given to diabetics but now lots of people will wear them—and sports people will wear them—you probably gathered a lot of extreme tail distribution data by reading Reddit’s r/biohackers … LEE: Yes. AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next. LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this. And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions? AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in. LEE: OK. AZHAR: As patients, we will have many, many more touch points and interaction with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches. And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time. LEE: Yes. AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. 
But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety. And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us preempt problems before they arise, and again, I use my experience for things that I’ve tracked really, really well. And I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of preemptive measure, so I think that that will become progressively more common and that sense that we will know our baselines. I mean, when you think about being an athlete, which is something I think about but could never ever do, what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive, but it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health. 
LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said. Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think somehow the systemic issues, the systemic issues that you tend to just see with such clarity, I think are going to be the most, kind of, profound drivers of change in the future. So thank you so much. AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you.   I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies. In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.   Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear. Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even in radical ways. I think a big insight I got from these conversations is how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level. 
Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, travel agent, and more were also shown during the conference. But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing. A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in. Until next time.
    What AI’s impact on individuals means for the health workforce and industry
    Transcript [MUSIC]    [BOOK PASSAGE]  PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.” [END OF BOOK PASSAGE]    [THEME MUSIC]    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.      [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak. You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues. 
So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.   To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar. Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence. Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics. Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society. [TRANSITION MUSIC] Here is my interview with Ethan Mollick: LEE: Ethan, welcome. ETHAN MOLLICK: So happy to be here, thank you. 
LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. [LAUGHTER] So to get started, how and why did it happen that you’ve become one of the leading experts on AI? MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was [getting] my PhD at MIT, I worked with Marvin Minsky and the MIT [Massachusetts Institute of Technology] Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it. And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially under education and AI and performance, I became sort of a go-to person in the field. And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months head start for GPT-4, right. Like that’s all we have at this point, is a few months’ head start. [LAUGHTER] So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question. LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been? 
MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that tell us about the state of AI right now. One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things. And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy and diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever. So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay. So we’re kind of all in the same boat here, which is a very unusual space for a new technology. LEE: And I, you know, explained that you’re at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty? MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation-entrepreneurship. I’ve launched startups before and working on that and education means I think about, how do organizations redesign themselves? 
How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is general-purpose technology; it’s going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect. So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated. LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI. MOLLICK: Yeah. Those are great questions. So first of all, when I was at the media lab, that was pre-the current boom in sort of, you know, even in the old-school machine learning kind of space. So there was a lot of potential directions to head in. 
While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system. There was a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing? The fact that it was just like brute force over the corpus of all human knowledge turns out to be a little bit of like a, you know, it’s a miracle and a little bit of a disappointment in some ways [LAUGHTER] compared to how elaborate some of this was. So, you know, I think that, that was sort of my first encounters in sort of the intellectual way. The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021 that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind. LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention. MOLLICK: Yes. LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. 
That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point? MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course, computers can give you answers to questions and write fun things. So there’s a difference in moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right. I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?” So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4 level intelligence from, you know, when it was released has dropped 99.97% at this point, right. LEE: Yes. Mm-hmm. MOLLICK: I mean, I could run a GPT-4 class system basically on my phone. 
Microsoft’s releasing things that can almost run on like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either. LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time. You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever? MOLLICK: I mean, you know, it’s not even dread as much as like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. [LAUGHTER] And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered. You know, as academics, we’re a little used to dead ends, right, and, like, you know, getting lapped. But the idea that entire fields are hitting that way. Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete. What do you do to change that? 
You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one. Now we’ve built a system that can build teaching simulations on demand by you talking to it with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet. LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this.  MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where like, ughhhh, but then there’s joy and basically like also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills. Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of our stress of our job comes from the fact that we suck at some of it. 
And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI. And I think it’s an ongoing question about how long that lasts. But for right now, like you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely. But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety. LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or request for prior authorization for some reimbursement to an insurance company. And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs? MOLLICK: So I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there’s lots of performance gains to be gained, right. 
The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about systems that fit along with this, right. So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there’s incredible amounts of individual innovation in productivity and performance improvements in this field, like very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains. And one of my big concerns is seeing that happen. We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 to 40% performance improvements, like not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result. LEE: You know, where are those productivity gains going, then, when you get to the organizational level? MOLLICK: Well, they’re dying for a few reasons. One is, there’s a tendency for individual contributors to underestimate the power of management, right. Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40% of the advantage of US firms over other companies, as far as we can tell, has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal.
At the individual level, when things get stuck there, right, you can’t start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen. So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons. And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves. So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there’s lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change. LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI? MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. They involve change. So I think part of this is embracing the idea that R&D has to happen in organizations again. 
What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field. So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab. So the crowd is the idea of how to empower clinicians and administrators and supporter networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to [get] AI to work, not just in direct patient care, right. But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill? And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves. LEE: So let’s shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones. And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ? MOLLICK: I mean, absolutely, right. 
Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it’s cheap, right? You could overrule it or whatever you want, but like not asking seems foolish. I think the two places where there’s a burning almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space. But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say when should you use AI and when not. Are there blind spots? What are those things? And I worry that, like, to me, that would be the crash project I’d be invoking because I’m doing the same thing in education, which is this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than not getting, you know, the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to. So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system.
And I think, you know, we have to be exploring that. LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching? MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t like an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting, you know, any one case is probably not that useful. A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but like magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing. So I worry when people say teach AI skills. No one’s been able to articulate to me as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right. I mean, there’s value in learning a little bit how the models work. There’s a value in working with these systems. A lot of it’s just hands on keyboard kind of work. But, like, we don’t have an easy slam dunk “this is what you learn in the world of AI” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboards, getting them to … there’s like four things I could teach you about AI, and two of them are already starting to disappear. But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful.
Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s like, that’s it as far as the research telling you what to do, and the rest is building intuition. LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.” [LAUGHS] MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize. [LAUGHTER] LEE: It’s good to chuckle about that, but actually, I can’t think of a better book, like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading? MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems. So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is. But using it. Voice modes help a lot. In terms of readings, I mean, I think that there are a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview … LEE: Yeah, that’s a great one.
MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think [Andrej] Karpathy has some really nice videos of use that I would recommend. Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But like all we can offer are hints in some ways. Like there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works. LEE: Yeah. MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right. LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here. Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME [Liaison Committee on Medical Education] accrediting body, what’s the one thing you would want them to really internalize? MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, which [is], “Let’s see how this works.” Because it’s, like, the things that make medical technologies hard to do, which is like unclear results, limited, you know, expensive use cases where it rolls out slowly.
So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine. I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods, like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast. So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right. We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other form of AI, and your AI working group that is thinking about how to solve this problem is not the right people here. LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer. MOLLICK: Yes. Yes. LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind.
And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall. But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence really just even in the title of the book tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea? MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” [LAUGHTER] Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person even though it isn’t, right. There’s a lot of warnings and caveats to it, but if you start from person, smart person you’re talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on?
As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right. LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens? MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who like may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people. So, like, it’s hard to imagine that in five to 10 years medicine would be so upended that, even if AI was better than doctors at every single thing doctors do, we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine. But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference. We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point. Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not. Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get? LEE: Yeah. MOLLICK: And we don’t know the answer to that question.
I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes it easier to assume the future, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help for everything. Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this right way or much worse if we handle it badly. Diagnostic accuracy will increase, right. And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen and how does that affect us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it. LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining. MOLLICK: Thank you. [TRANSITION MUSIC]   I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. 
And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work. One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does. In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI. The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI. Here’s now my interview with Azeem Azhar: LEE: Azeem, welcome. AZEEM AZHAR: Peter, thank you so much for having me. LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before. And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day?
AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip … LEE: Oh wow. AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started. And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large. LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through? AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed. Now, I’d been aware of GPT-3 and GPT-2, which I played around with and with BERT, the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language that really made me think we’ve crossed into a new domain. We’ve gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. 
And then it was a few months later that ChatGPT came out—November the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely passed some kind of threshold. LEE: And who’s the we that you were experimenting with? AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, [LAUGHTER] or they walk into our virtual team room, and we try to solve problems. LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance, manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and which ways, you know, efficiencies are found. And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and looking at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine? AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio.
I mean, this is all medicine. I mean, it’s hard to imagine a sector that is [LAUGHS] more broad than that. So I think we can start to break it down, and, you know, where we’re seeing things with generative AI will be that the, sort of, softest entry point, which is the medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros I noticed, right? [LAUGHTER] They’re on the tablet computers, and they’re scribing away. And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload. And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help. So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced. So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized. LEE: Yeah. AZHAR: And then I think there are some other things that are going on that I find really, really exciting.
So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura. LEE: Yup. AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to. And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and patient pathway. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on. It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector. And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. [LAUGHTER] But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout. So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems. LEE: I love how you break that down. And I want to press on a couple of things. You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry.
I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated? AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example. In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different. I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say well no one ever does this, and I said, well you know the thing is that I kind of just want to get this thing to go away. LEE: Yeah. AZHAR: And I think that that’s why medicine is and healthcare is so different and more complex. But I also think that’s why AI can be really, really helpful. 
I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week. LEE: Right. Yeah. AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer. LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution. Because like healthcare, as a consumer, I don’t have choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons. And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work? AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice. I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about you know asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. 
And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors. I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner. LEE: Yeah. AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly. LEE: Right. AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few little things that the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful. LEE: Yeah, one last question while we’re still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis. And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. 
So how can we or should we be thinking about efficiency and productivity since obviously costs are, in most of the developed world, are a huge, huge problem? AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around [LAUGHTER] the hospital faster and faster than ever before. We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right? LEE: Yeah, yeah. AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed the productivity by orders of magnitude of people who made cars starting with Henry Ford because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system. So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later … LEE: Right. AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for. And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale out factor. 
And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system … LEE: Yup. AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that. So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies that might slow things down. But I think a reimagining is possible. And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya [now known as Narayana Health] was another. And in the latter, they were a cardiac care unit where you couldn’t get enough heart surgeons. LEE: Yeah, yep. AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own. LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. 
You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumer might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, that is certified in some way, licensed to do. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop? AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions like, What time should I leave for the airport to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold. If I come back to my example of prescribing Ventolin. It’s really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 years or 12 years of medical training. And why that couldn’t be prescribed by an algorithm or an AI system. LEE: Right. Yep. Yep. AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. 
And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time. LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you. AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician. In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’ve just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing. Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart: I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI” is, it’s obviously been done in PowerPoint naturally, and it’s a bunch of arrows. 
[LAUGHS] Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that. LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time. [LAUGHS] AZHAR: Yeah, yeah. Thank god for Clippy. Yes. LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself [LAUGHS], about regulation. And we made some minor commentary on that. And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway. AZHAR: Right. LEE: And so now two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like? AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs [randomized control trials], and there are the classic set of processes you go through. You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning so that every element of the system can learn from this experience. So finding that space where there can be bit of experimentation, I think, becomes very, very important. 
And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly. So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what is it to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots [very rapidly]. LEE: Yes. AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval. I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s saying … hearing what I’m saying, but he’s actually pulling up the real data going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be. [LAUGHTER] LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. And because one of the things that consumers want to know always is, you know, what’s the truth? AZHAR: Right. LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. 
And so the regulatory issues seem very, very different for these two cases somehow. AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week. And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have like a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician. LEE: Yeah. AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on Creatine, Reddit may yet prove to have been right. [LAUGHTER] LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah. AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor traditionally given to diabetics but now lots of people will wear them—and sports people will wear them—you probably gathered a lot of extreme tail distribution data by reading the Reddit/biohackers … LEE: Yes. AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM [continuous glucose monitor]. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next. LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this. And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions? 
AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in. LEE: OK. [LAUGHS] AZHAR: As patients, we will have many, many more touch points and interaction with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches. And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time. LEE: Yes. AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety. And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us personalize problems before they arise, and again, I use my experience for things that I’ve tracked really, really well. And I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of preemptive measure, so I think that that will become progressively more common and that sense that we will know our baselines. I mean, when you think about being an athlete, which is something I think about, but I could never ever do, [LAUGHTER] but what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. 
You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive, but it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health. LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said. Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think somehow the systemic issues, the systemic issues that you tend to just see with such clarity, I think are going to be the most, kind of, profound drivers of change in the future. So thank you so much. AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you. [TRANSITION MUSIC]   I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies. In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.   Azeem’s personal story about managing his asthma was also a great example. 
You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear. Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even in radical ways. I think a big insight I got from these conversations is how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level. Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, travel agent, and more were also shown during the conference. But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. 
And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing. [THEME MUSIC] A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in. Until next time. [MUSIC FADES]
  • AI is rotting your brain and making you stupid

    For nearly 10 years I have written about science and technology, and I’ve been an early adopter of new tech for much longer. As a teenager in the mid-1990s I annoyed the hell out of my family by jamming up the phone line for hours with a dial-up modem, connecting to bulletin board communities all over the country. When I started writing professionally about technology in 2016, I was all for our seemingly inevitable transhumanist future. When the chip is ready I want it immediately stuck in my head, I remember saying proudly in our busy office. Why not improve ourselves where we can? Since then, my general view on technology has dramatically shifted. Watching a growing class of super-billionaires erode the democratizing nature of technology by maintaining corporate controls over what we use and how we use it has fundamentally changed my personal relationship with technology. Deeply disturbing philosophical stances like longtermism, effective altruism, and singularitarianism have enveloped the minds of those rich, powerful men controlling the world and only further entrenched inequality. A recent Black Mirror episode really rammed home the perils we face when technology is so controlled by capitalist interests. A sick woman is given a brain implant connected to a cloud server to keep her alive. The system is managed through a subscription service where the user pays for monthly access to the cognitive abilities managed by the implant. As time passes, that subscription cost gets more and more expensive, and, well, it’s Black Mirror, so you can imagine where things end up.

    Titled 'Common People', the episode is from series 7 of Black Mirror (Netflix)

    The enshittification of our digital world has been impossible to ignore. You’re not imagining things: Google Search is getting worse. But until the emergence of AI, I’d never been truly concerned about a technological innovation in and of itself. A recent article looked at how generative AI tech such as ChatGPT is being used by university students. The piece was authored by a tech admin at New York University, and it’s filled with striking insights into how AI is shaking the foundations of educational institutions. Unsurprisingly, students are using ChatGPT for everything from summarizing complex texts to writing entire essays from scratch. But one of the reflections quoted in the article immediately jumped out at me. When a student was asked why they relied on generative AI so much when putting work together, they responded, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?” My first response was, of course, why wouldn’t you? It made complete sense. For a second. And then I thought, hang on, what is being lost by speeding from point A to point B in a car?

    What if the quickest way from point A to point B wasn't the best way to get there? (Depositphotos)

    Let’s further the analogy. You need to go to the grocery store. It’s a 10-minute walk away but a three-minute drive. Why wouldn’t you drive? Well, the only benefit of driving is saving time. That’s inarguable. You’ll be back home and cooking up your dinner before the person on foot even gets to the grocery store. Congratulations. You saved yourself about 15 minutes. In a world where efficiency trumps everything, this is the best choice. Use that extra time in your day wisely. But what are the benefits of not driving, taking the extra time, and walking? First, there are the environmental benefits of not using a car unnecessarily and spewing emissions into the air, either directly from combustion or indirectly for those with electric cars. Second, there are the health benefits from the little bit of exercise you get by walking. Our sedentary lives are quite literally killing us, so a 20-minute walk a day is likely to be incredibly positive for your health. But there are also more abstract benefits to be gained by walking this short trip from A to B. Walking connects us to our neighborhood. It slows things down. It helps us better understand the community and environment we are living in. A recent study summarized the benefits of walking around your neighborhood, suggesting the practice leads to greater social connectedness and reduced feelings of isolation. So what are we losing when we use a car to get from point A to point B? Potentially a great deal. But let’s move out of abstraction and into the real world. An article in the Columbia Journalism Review asked nearly 20 news media professionals how they were integrating AI into their personal workflow. The responses were wildly varied. 
Some journalists refused to use AI for anything more than superficial interview transcription, while others used it broadly: to edit text, answer research questions, summarize large bodies of scientific text, or search massive troves of data for salient bits of information. In general, the line almost all those media professionals shared was that they would never explicitly use AI to write their articles. But for some, almost every other stage of the creative process in developing a story was fair game for AI assistance. I found this a little horrifying. Farming out certain creative development processes to AI felt not only ethically wrong but also like key cognitive stages were being lost, skipped over, considered unimportant. I’ve never considered myself to be an extraordinarily creative person. I don’t feel like I come up with new or original ideas when I work. Instead, I see myself more as a compiler. I enjoy finding connections between seemingly disparate things, linking ideas and using those pieces as building blocks to create my own work. As a writer and journalist, I see this process as the whole point. A good example of this is a story I published in late 2023 investigating the relationship between long Covid and psychedelics. The story began earlier in the year when I read an intriguing study linking long Covid with serotonin abnormalities in the gut. Being interested in the science of psychedelics, and knowing that psychedelics very much influence serotonin receptors, I wondered if there could be some kind of link between these two seemingly disparate topics. The idea sat in the back of my mind for several months, until I came across a person who told me they had been actively treating their own long Covid symptoms with a variety of psychedelic remedies. 
After an expansive and fascinating interview, I started diving into different studies looking to understand how certain psychedelics affect the body, and whether there could be any associations with long Covid treatments. Eventually I stumbled across a few compelling associations. It took weeks of reading different scientific studies, speaking to various researchers, and thinking about how several discordant threads could be somehow linked. Could AI have assisted me in the process of developing this story? No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas, all encapsulated within the frame of a person’s subjective experience. And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence.

    LLMs are sophisticated language imitators, delivering responses that resemble what they compute a response would look like (Depositphotos)

    ChatGPT, and all the assorted clones that have emerged over the last couple of years, are a form of technology called large language models (LLMs). At the risk of enraging those who actually work in this mind-bendingly complex field, I’m going to dangerously over-simplify how these things work. It’s important to know that when you ask a system like ChatGPT a question, it doesn’t understand what you are asking it. The response these systems generate to any prompt is simply a simulation of what it computes a response would look like, based on a massive dataset. So if I were to ask the system a random question like, “What color are cats?”, the system would draw on the world’s trove of information on cats and colors to create a response that mirrors the way most pre-existing text talks about cats and colors. The system builds its response word by word, creating something that reads coherently to us, by establishing a probability for what word should follow each prior word. It’s not thinking, it’s imitating. What these generative AI systems spit out are word-salad amalgams of what they compute the response to your prompt should look like, based on training from millions of books and webpages that have been previously published. Setting aside for a moment the accuracy of the responses these systems deliver, I am more interested in the cognitive stages that this technology allows us to skip past. For thousands of years we have used technology to improve our ability to manage highly complex tasks. The idea is called cognitive offloading, and it’s as simple as writing something down on a notepad or saving a contact number on your smartphone. There are pros and cons to cognitive offloading, and scientists have been digging into the phenomenon for years. As long as we have been doing it, there have been people criticizing the practice. The legendary Greek philosopher Socrates was notorious for his skepticism around the written word. 
He believed knowledge emerged through a dialectical process so writing itself was reductive. He even went so far as to suggestthat writing makes us dumber.

    “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”

    Wrote Plato, quoting Socrates

    Almost every technological advancement in human history can be seen to be accompanied by someone suggesting it will be damaging. Calculators have destroyed our ability to properly do math. GPS has corrupted our spatial memory. Typewriters killed handwriting. Computer word processors killed typewriters. Video killed the radio star.And what have we lost? Well, zooming in on writing, for example, a 2020 study claimed brain activity is greater when a note is handwritten as opposed to being typed on a keyboard. And then a 2021 study suggested memory retention is better when using a pen and paper versus a stylus and tablet. So there are certainly trade-offs whenever we choose to use a technological tool to offload a cognitive task.There’s an oft-told story about gonzo journalist Hunter S. Thompson. It may be apocryphal but it certainly is meaningful. He once said he sat down and typed out the entirety of The Great Gatsby, word for word. According to Thompson, he wanted to know what it felt like to write a great novel.

    Thompson was infamous for writing everything on typewriters, even when computers emerged in the 1990sPublic Domain

    I don’t want to get all wishy-washy here, but these are the brass tacks we are ultimately falling on. What does it feel like to think? What does it feel like to be creative? What does it feel like to understand something?A recent interview with Satya Nadella, CEO of Microsoft, reveals how deeply AI has infiltrated his life and work. Not only does Nadella utilize nearly a dozen different custom-designed AI agents to manage every part of his workflow – from summarizing emails to managing his schedule – but he also uses AI to get through podcasts quickly on his way to work. Instead of actually listening to the podcasts he has transcripts uploaded to an AI assistant who he then chats to about the information while commuting.Why listen to the podcast when you can get the gist through a summary? Why read a book when you can listen to the audio version at X2 speed? Or better yet, watch the movie? Or just read a Wikipedia entry. Or get AI to summarize the wikipedia entry.I’m not here to judge anyone on the way they choose to use technology. Do what you want with ChatGPT. But for a moment consider what you may be skipping over by racing from point A to point B.Sure, you can give ChatGPT a set of increasingly detailed prompts; adding complexity to its summary of a scientific journal or a podcast, but at what point do the prompts get so granular that you may as well read the journal entry itself? If you get generative AI to skim and summarize something, what is it missing? If something was worth being written then surely it is worth being read?If there is a more succinct way to say something then maybe we should say it more succinctly.In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues language, and writing, is fundamentally about communication. 
If we write an email to someone we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systemsare erasing our ability to think, consider, and write. Where does it all end? For Chiang it's pretty dystopian feedback loop of dialectical slop.

    “We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?”

    Ted Chiang
    #rotting #your #brain #making #you
  • AI is rotting your brain and making you stupid
    For nearly 10 years I have written about science and technology, and I’ve been an early adopter of new tech for much longer. As a teenager in the mid-1990s I annoyed the hell out of my family by jamming up the phone line for hours with a dial-up modem, connecting to bulletin board communities all over the country.

    When I started writing professionally about technology in 2016 I was all for our seemingly inevitable transhumanist future. When the chip is ready I want it immediately stuck in my head, I remember saying proudly in our busy office. Why not improve ourselves where we can?

    Since then, my general view on technology has dramatically shifted. Watching a growing class of super-billionaires erode the democratizing nature of technology by maintaining corporate controls over what we use and how we use it has fundamentally changed my personal relationship with technology. Seeing deeply disturbing philosophical stances like longtermism, effective altruism, and singularitarianism envelop the minds of the rich, powerful men controlling the world has only hardened that shift.

    A recent Black Mirror episode really rammed home the perils we face by having technology so controlled by capitalist interests. A sick woman is given a brain implant connected to a cloud server to keep her alive. The system is managed through a subscription service where the user pays for monthly access to the cognitive abilities managed by the implant. As time passes, that subscription cost gets more and more expensive – and well, it’s Black Mirror, so you can imagine where things end up.

    Titled ‘Common People’, the episode is from series 7 of Black Mirror (Netflix)

    The enshittification of our digital world has been impossible to ignore.
    You’re not imagining things, Google Search is getting worse. But until the emergence of AI (or, as we’ll discuss later, large language models that pretend to look and sound like an artificial intelligence) I’ve never been truly concerned about a technological innovation in and of itself.

    A recent article looked at how generative AI tech such as ChatGPT is being used by university students. The piece was authored by a tech admin at New York University and it’s filled with striking insights into how AI is shaking the foundations of educational institutions. Unsurprisingly, students are using ChatGPT for everything from summarizing complex texts to completely writing essays from scratch. But one of the reflections quoted in the article immediately jumped out at me.

    When a student was asked why they relied on generative AI so much when putting work together, they responded, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”

    My first response was, of course, why wouldn’t you? It made complete sense. For a second. And then I thought, hang on, what is being lost by speeding from point A to point B in a car? What if the quickest way from point A to point B wasn’t the best way to get there?

    Let’s further the analogy. You need to go to the grocery store. It’s a 10-minute walk away but a three-minute drive. Why wouldn’t you drive? Well, the only benefit of driving is saving time. That’s inarguable. You’ll be back home and cooking up your dinner before the person on foot even gets to the grocery store. Congratulations. You saved yourself about 20 minutes. In a world where efficiency trumps everything this is the best choice. Use that extra 20 minutes in your day wisely.

    But what are the benefits of not driving, taking the extra time, and walking? First, you have environmental benefits: not driving means not spewing emissions into the air unnecessarily, either directly from combustion or indirectly for those with electric cars. Secondly, you have health benefits from the little bit of exercise you get by walking.
    Our sedentary lives are quite literally killing us, so a 20-minute walk a day is likely to be incredibly positive for your health. But there are also more abstract benefits to be gained by walking this short trip from A to B. Walking connects us to our neighborhood. It slows things down. It helps us better understand the community and environment we are living in. A recent study summarized the benefits of walking around your neighborhood, suggesting the practice leads to greater social connectedness and reduced feelings of isolation.

    So what are we losing when we use a car to get from point A to point B? Potentially a great deal. But let’s move out of abstraction and into the real world.

    An article in the Columbia Journalism Review asked nearly 20 news media professionals how they were integrating AI into their personal workflows. The responses were wildly varied. Some journalists refused to use AI for anything more than superficial interview transcription, while others used it broadly: to edit text, answer research questions, summarize large bodies of scientific text, or search massive troves of data for salient bits of information. In general, the line almost all those media professionals shared was that they would never explicitly use AI to write their articles. But for some, almost every other stage of the creative process in developing a story was fair game for AI assistance.

    I found this a little horrifying. Farming out certain creative development processes to AI felt not only ethically wrong but also like key cognitive stages were being lost, skipped over, considered unimportant.

    I’ve never considered myself to be an extraordinarily creative person. I don’t feel like I come up with new or original ideas when I work. Instead, I see myself more as a compiler. I enjoy finding connections between seemingly disparate things, linking ideas and using those pieces as building blocks to create my own work.
    As a writer and journalist I see this process as the whole point. A good example is a story I published in late 2023 investigating the relationship between long Covid and psychedelics. The story began earlier in the year when I read an intriguing study linking long Covid with serotonin abnormalities in the gut. Being interested in the science of psychedelics, and knowing that psychedelics very much influence serotonin receptors, I wondered if there could be some kind of link between these two seemingly disparate topics.

    The idea sat in the back of my mind for several months, until I came across a person who told me they had been actively treating their own long Covid symptoms with a variety of psychedelic remedies. After an expansive and fascinating interview I started diving into different studies, looking to understand how certain psychedelics affect the body and whether there could be any associations with long Covid treatments. Eventually I stumbled across a few compelling associations. It took weeks of reading different scientific studies, speaking to various researchers, and thinking about how several discordant threads could be somehow linked.

    Could AI have assisted me in the process of developing this story? No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas, all encapsulated within the frame of a person’s subjective experience. And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence.

    LLMs are sophisticated language imitators, delivering responses that resemble what they compute a response would look like (Depositphotos)

    ChatGPT, and all the assorted clones that have emerged over the last couple of years, are a form of technology called LLMs (large language models).
    At the risk of enraging those who actually work in this mind-bendingly complex field, I’m going to dangerously over-simplify how these things work. It’s important to know that when you ask a system like ChatGPT a question, it doesn’t understand what you are asking it. The response these systems generate to any prompt is simply a simulation of what it computes a response would look like, based on a massive dataset.

    So if I were to ask the system a random question like, “What color are cats?”, the system would draw on the world’s trove of information on cats and colors to create a response that mirrors the way most pre-existing text talks about cats and colors. The system builds its response word by word, creating something that reads coherently to us, by establishing a probability for what word should follow each prior word. It’s not thinking, it’s imitating. What these generative AI systems are spitting out are word-salad amalgams of what they think the response to your prompt should look like, based on training from millions of books and webpages that have been previously published.

    Setting aside for a moment the accuracy of the responses these systems deliver, I am more interested in (or concerned with) the cognitive stages that this technology allows us to skip past. For thousands of years we have used technology to improve our ability to manage highly complex tasks. The idea is called cognitive offloading, and it’s as simple as writing something down on a notepad or saving a contact number on your smartphone. There are pros and cons to cognitive offloading, and scientists have been digging into the phenomenon for years.

    As long as we have been doing it, there have been people criticizing the practice. The legendary Greek philosopher Socrates was notorious for his skepticism around the written word. He believed knowledge emerged through a dialectical process, so writing itself was reductive. He even went so far as to suggest (according to his student Plato, who did write things down) that writing makes us dumber.
    “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”

    – Plato, quoting Socrates

    Almost every technological advancement in human history has been accompanied by someone suggesting it will be damaging. Calculators have destroyed our ability to properly do math. GPS has corrupted our spatial memory. Typewriters killed handwriting. Computer word processors killed typewriters. Video killed the radio star.

    And what have we lost? Well, zooming in on writing, for example, a 2020 study claimed brain activity is greater when a note is handwritten as opposed to typed on a keyboard. And a 2021 study suggested memory retention is better when using a pen and paper versus a stylus and tablet. So there are certainly trade-offs whenever we choose to use a technological tool to offload a cognitive task.

    There’s an oft-told story about gonzo journalist Hunter S. Thompson. It may be apocryphal, but it certainly is meaningful. He once said he sat down and typed out the entirety of The Great Gatsby, word for word. According to Thompson, he wanted to know what it felt like to write a great novel.
What does it feel like to understand something?A recent interview with Satya Nadella, CEO of Microsoft, reveals how deeply AI has infiltrated his life and work. Not only does Nadella utilize nearly a dozen different custom-designed AI agents to manage every part of his workflow – from summarizing emails to managing his schedule – but he also uses AI to get through podcasts quickly on his way to work. Instead of actually listening to the podcasts he has transcripts uploaded to an AI assistant who he then chats to about the information while commuting.Why listen to the podcast when you can get the gist through a summary? Why read a book when you can listen to the audio version at X2 speed? Or better yet, watch the movie? Or just read a Wikipedia entry. Or get AI to summarize the wikipedia entry.I’m not here to judge anyone on the way they choose to use technology. Do what you want with ChatGPT. But for a moment consider what you may be skipping over by racing from point A to point B.Sure, you can give ChatGPT a set of increasingly detailed prompts; adding complexity to its summary of a scientific journal or a podcast, but at what point do the prompts get so granular that you may as well read the journal entry itself? If you get generative AI to skim and summarize something, what is it missing? If something was worth being written then surely it is worth being read?If there is a more succinct way to say something then maybe we should say it more succinctly.In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues language, and writing, is fundamentally about communication. If we write an email to someone we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systemsare erasing our ability to think, consider, and write. Where does it all end? 
    If we write an email to someone, we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systems (or these simulations of intelligence) are erasing our ability to think, consider, and write. Where does it all end? For Chiang, it’s a pretty dystopian feedback loop of dialectical slop.

    “We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?”

    – Ted Chiang
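    The “word by word” probability process the essay describes can be made concrete with a deliberately crude sketch: a bigram model built from a made-up three-sentence corpus. This is far simpler than a real LLM (which conditions on long contexts with a neural network, not a lookup table), and the corpus and function names here are invented for illustration, but it shows the imitation-not-understanding point.

    ```python
    import random
    from collections import defaultdict

    # Tiny made-up corpus standing in for "millions of books and webpages".
    corpus = (
        "cats are usually black or white or orange . "
        "cats are curious animals . "
        "dogs are loyal animals ."
    ).split()

    # Record which words have been observed following each word (a bigram table).
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def generate(start, length=8, seed=0):
        """Build a response word by word: each next word is drawn from the
        distribution of words seen after the previous word in the corpus.
        No understanding of cats or colors is involved, only imitation."""
        rng = random.Random(seed)
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:  # no observed continuation, stop
                break
            words.append(rng.choice(options))
        return " ".join(words)

    print(generate("cats"))
    ```

    Every sentence this toy produces looks locally plausible because each word pair appeared somewhere in the training text, which is the same trick, at vastly greater scale and sophistication, behind the coherent-sounding answers an LLM gives to “What color are cats?”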
    NEWATLAS.COM
  • Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI


    After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace.
    Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features. 
    On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: Can Google’s brilliance in AI research and technology translate into products faster than that of its rivals, whose own edge lies in packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI?
    Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50x more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. This momentum is also reflected in developer adoption, with Pichai saying that over 7 million developers are now building with the Gemini API, representing a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Mode (rolling out in the U.S.) and AI Overviews (already serving 1.5 billion users monthly) are the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era.
    Source: Google I/O 2025
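The scale claims above are easy to sanity-check: 480 trillion tokens against Microsoft’s 100 trillion is a ratio of 4.8 (“almost 5x”), and 50x year-over-year growth implies Google was processing roughly 9.6 trillion tokens a month a year earlier. A quick sketch, using only the figures as reported in the keynotes (not independently verified):

```python
# Reported monthly token volumes, in trillions, per the I/O and Build keynotes.
google_now = 480      # Google, per Pichai at I/O 2025
microsoft_now = 100   # Microsoft, per Nadella
growth_factor = 50    # Google's claimed year-over-year multiplier

ratio_vs_microsoft = google_now / microsoft_now   # 4.8 -> "almost 5x"
google_year_ago = google_now / growth_factor      # implied baseline a year ago

print(f"Google vs. Microsoft: {ratio_vs_microsoft:.1f}x")
print(f"Implied Google volume a year ago: {google_year_ago:.1f}T tokens/month")
```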
    Google’s doubling-down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant – one powered by Google, and not other companies – creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save its $200 billion search business that depends on owning the starting point and avoiding disruption by OpenAI? Or will Google fully open its foundational AI for other developers and companies to leverage – another segment representing a significant portion of its business, engaging over 20 million developers, more than any other company? 
    It has sometimes stopped short of a radical focus on building these core products for others with the same clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. Instead, Google said Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it.
    Google’s grand design: the ‘world model’ and universal assistant
    The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated Google continued to “double down” on efforts towards artificial general intelligence. While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.” 
    This concept of ‘a world model,’ as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early indicator of this direction – significant, though easily overlooked by those not steeped in foundational AI research – is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text. It offers a glimpse at an AI that can simulate and understand dynamic systems.
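Hassabis’s framing – a model that can “make plans and imagine new experiences by simulating aspects of the world” – can be illustrated with a deliberately tiny sketch. The grid world, actions, and transition function below are invented for illustration and have nothing to do with Gemini or Genie 2 internals; they only show the shape of the idea: given a state and an action, a world model predicts the next state, which lets an agent plan by imagining rollouts before acting in the real environment.

```python
# A toy "world model": a transition function that predicts the next state
# from (state, action), letting an agent plan purely by simulation.
# Everything here (5x5 grid, actions, goal) is invented for illustration.
from collections import deque

ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def predict_next(state, action, width=5, height=5):
    """World model: predict the successor of a grid-world state."""
    x, y = state
    dx, dy = ACTIONS[action]
    # The model "knows" the physics: you cannot step off the grid.
    return (min(max(x + dx, 0), width - 1), min(max(y + dy, 0), height - 1))

def plan(start, goal):
    """Plan by imagining rollouts inside the model (breadth-first search)."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action in ACTIONS:
            nxt = predict_next(state, action)  # imagined, never executed
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

print(plan((0, 0), (2, 1)))  # a shortest imagined route (3 steps)
```

The point of the toy: the agent never touches the environment while planning; all candidate futures are rolled out inside the model. Scaling that idea from a 5x5 grid to video, audio, and physical surroundings is, in essence, the research program Hassabis describes.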
    Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented at I/O most comprehensively – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage. Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.” 
    This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that “world-model understanding is already leaking into creative tooling.” For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that “AI systems will need world models to operate effectively.”
    CEO Sundar Pichai reinforced this, citing Project Astra, which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context” enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understands; this personal context forms the core of the app’s intelligence. Google also quietly previewed Gemini Diffusion, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp.
    The strategic stakes: defending search, courting developers amid an AI arms race
    This colossal undertaking is driven by Google’s massive R&D capabilities but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 Chief AI Officer told VentureBeat, reassuring customers with its full commitment to Copilot tooling. The executive requested anonymity because of the sensitivity of commenting on the intense competition between the AI cloud providers. Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said.
    Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. And so AR glasses, Pichai said, “maybe that’s the next leap…that’s what’s exciting for me.”
    But this AI offensive is a race against multiple clocks. First, the $200 billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web.
    Finally, execution speed matters. Google has been criticized for moving slowly in past years. But over the past 12 months, it became clear Google had been working patiently on multiple fronts, and that it has paid off with faster growth than rivals. The challenge of successfully navigating this AI transition at massive scale is immense, as evidenced by the recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves. 
    At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos. Nadella’s “open agentic web” vision offers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether it be Google’s or another competitor’s – within a Microsoft-centric framework.
    OpenAI, meanwhile, is way out ahead with the consumer reach of its ChatGPT product, with recent references by the company to having 600 million monthly users, and 800 million weekly users. This compares to the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering, and is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. Beyond making leading models, OpenAI is making a provocative vertical play with its reported $6.5 billion acquisition of Jony Ive’s IO, pledging to move “beyond these legacy products” – and hinting that it is launching a hardware product that would attempt to disrupt AI just like the iPhone disrupted mobile. While any of this may potentially disrupt Google’s next-gen personal computing ambitions, it’s also true that OpenAI’s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocols and easier model interchangeability.
    Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google’s VP of Developer X, told VentureBeat, serving Google’s diverse global developer community means “it’s not a one size fits all,” leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs.
    Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral and Cohere models, giving AWS customers a pragmatic, multi-model default.
    For enterprise decision-makers: navigating Google’s ‘world model’ future
    Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations:

    Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become default.
    Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google’s “world model” research, multimodal capabilities, and the AGI trajectory promised by Google offers a path to potentially significant innovation.
    Prepare for a new interaction paradigm: Success for Google’s “universal assistant” would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery.
    Factor in the long game: Aligning with Google’s vision is a long-term commitment. The full “world model” and AGI are potentially distant horizons. Decision-makers must balance this with immediate needs and platform complexities.
    Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now. Disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility.
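The “diversified strategy” point above has a concrete engineering shape: put a thin abstraction between your application and any single assistant platform, so a Gemini-, Copilot-, or OpenAI-backed agent can be swapped without rewrites. A minimal sketch, with invented class names and stub responses standing in for real vendor SDK calls:

```python
from typing import Protocol

class Assistant(Protocol):
    """The only surface the application depends on."""
    def ask(self, prompt: str) -> str: ...

# Stub providers; in practice each would wrap a vendor SDK
# (google-genai, openai, a Copilot connector, etc.).
class GeminiStub:
    def ask(self, prompt: str) -> str:
        return f"[gemini] {prompt}"

class CopilotStub:
    def ask(self, prompt: str) -> str:
        return f"[copilot] {prompt}"

def summarize_ticket(assistant: Assistant, ticket: str) -> str:
    # Application logic never names a vendor, so changing providers
    # is a configuration change, not a rewrite.
    return assistant.ask(f"Summarize for an on-call engineer: {ticket}")

print(summarize_ticket(GeminiStub(), "DB latency spike"))
# Switching vendors touches only the constructor:
print(summarize_ticket(CopilotStub(), "DB latency spike"))
```

Because `Assistant` is a structural protocol, any object with a matching `ask` method satisfies it; no vendor SDK needs to inherit from anything, which keeps the swap cost close to zero.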

    These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat’s Transform 2025 next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged.
    Google’s defining offensive: shaping the future or strategic overreach?
    Google’s I/O spectacle was a strong statement: Google signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a “world model” and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense.
    The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focused so broadly on both consumers and business – an agenda that is arguably much broader than that of its key competitors?
    The next few years will be pivotal. If Google delivers on its “world model” vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could be a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly. 

    Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace. Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features.  On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: Can Google’s brilliance in AI research and technology translate into products faster than its rivals, whose edge has its own brilliance: packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI? Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50× more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. 
This momentum is also reflected in developer adoption, with Pichai saying that over 7 million developers are now building with the Gemini API, representing a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Modeand AI Overviewsare the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era. Source: Google I/O 20025 Google’s doubling-down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant – one powered by Google, and not other companies – creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save its billion search business that depends on owning the starting point and avoiding disruption by OpenAI? Or will Google fully open its foundational AI for other developers and companies to leverage – another  segment representing a significant portion of its business, engaging over 20 million developers, more than any other company?  It has sometimes stopped short of a radical focus on building these core products for others with the same clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. 
However, Google followed up by saying Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it. Google’s grand design: the ‘world model’ and universal assistant The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated Google continued to “double down” on efforts towards artificial general intelligence. While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.”  This concept of ‘a world model,’ as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early, perhaps easily overlooked by those not steeped in foundational AI research, yet significant indicator of this direction is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text. It offers a glimpse at an AI that can simulate and understand dynamic systems. 
Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented at I/O most comprehensively – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage.Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.”  This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that ‘world-model understanding is already leaking into creative tooling.’ For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that ‘AI systems will need world models to operate effectively.” CEO Sundar Pichai reinforced this, citing Project Astra which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context”enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understandsform the core intelligence. Google also quietly previewed Gemini Diffusion, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. 
Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp. The strategic stakes: defending search, courting developers amid an AI arms race This colossal undertaking is driven by Google’s massive R&D capabilities but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 Chief AI Officer told VentureBeat, reassuring customers with its full commitment to tooling Copilot. The executive requested anonymity because of the sensitivity of commenting on the intense competition between the AI cloud providers. Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said. Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. And so AR glasses, Pichai said, “maybe that’s the next leap…that’s what’s exciting for me.” But this AI offensive is a race against multiple clocks. First, the billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web. Finally, execution speed matters. 
Google has been criticized for moving slowly in past years. But over the past 12 months, it became clear Google had been working patiently on multiple fronts, and that it has paid off with faster growth than rivals. The challenge of successfully navigating this AI transition at massive scale is immense, as evidenced by the recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves.  At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos. Nadella’s “open agentic web” visionoffers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether it be Google’s or another competitor’s – within a Microsoft-centric framework. OpenAI, meanwhile, is way out ahead with the consumer reach of its ChatGPT product, with recent references by the company to having 600 million monthly users, and 800 million weekly users. This compares to the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering, and is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. 
Beyond making leading models, OpenAI is making a provocative vertical play with its reported billion acquisition of Jony Ive’s IO, pledging to move “beyond these legacy products” – and hinting that it was launching a hardware product that would attempt to disrupt AI just like the iPhone disrupted mobile. While any of this may potentially disrupt Google’s next-gen personal computing ambitions, it’s also true that OpenAI’s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocolsand easier model interchangeability. Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google’s VP of Developer X, told VentureBeat serving Google’s diverse global developer community means “it’s not a one size fits all,” leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs. Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral and Cohere models, giving AWS customers a pragmatic, multi-model default. For enterprise decision-makers: navigating Google’s ‘world model’ future Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations: Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become default. Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google’s “world model” research, multimodal capabilities, and the AGI trajectory promised by Google offers a path to potentially significant innovation. Prepare for a new interaction paradigm: Success for Google’s “universal assistant” would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery. 
Factor in the long game: Aligning with Google’s vision is a long-term commitment. The full “world model” and AGI are potentially distant horizons. Decision-makers must balance this with immediate needs and platform complexities. Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now. Disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility. These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat’s Transform 2025 next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged. Google’s defining offensive: shaping the future or strategic overreach? Google’s I/O spectacle was a strong statement: Google signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a “world model” and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense. The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focused so broadly on both consumers and business – an agenda that is arguably much broader than that of its key competitors? The next few years will be pivotal. 
If Google delivers on its “world model” vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could be a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly.  Daily insights on business use cases with VB Daily If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI. Read our Privacy Policy Thanks for subscribing. Check out more VB newsletters here. An error occured. #googles #worldmodel #bet #building #operating
    VENTUREBEAT.COM
    Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace. Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features.  On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: Can Google’s brilliance in AI research and technology translate into products faster than its rivals, whose edge has its own brilliance: packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI? Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50× more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. 
This momentum is also reflected in developer adoption: Pichai said over 7 million developers are now building with the Gemini API, a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Mode (rolling out in the U.S.) and AI Overviews (already serving 1.5 billion users monthly) are the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era. (Source: Google I/O 2025) Google’s doubling-down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant, one powered by Google and not other companies, creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it for itself, to save the $200 billion search business that depends on owning the starting point and avoiding disruption by OpenAI? Or will it fully open its foundational AI for other developers and companies to build on – a segment representing a significant portion of its business, engaging over 20 million developers, more than any other company? Google has sometimes stopped short of building these core products for others with the same radical clarity as its nemesis, Microsoft, because it keeps a lot of core functionality reserved for its cherished search engine. That said, it is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. 
However, Google followed up by saying Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer” – a signal that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it.

Google’s grand design: the ‘world model’ and universal assistant

The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated that Google continues to “double down” on efforts toward artificial general intelligence (AGI). While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.” This concept of a “world model,” as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early indicator of this direction – significant, though easily overlooked by those not steeped in foundational AI research – is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, playable two-dimensional game environments from varied prompts like images or text, offering a glimpse of an AI that can simulate and understand dynamic systems. Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented most comprehensively at I/O – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage. 
(While other AI leaders, including Microsoft’s Satya Nadella, OpenAI’s Sam Altman, and xAI’s Elon Musk, have all discussed “world models,” Google uniquely and most comprehensively ties this foundational concept to its near-term strategic thrust: the “universal AI assistant.”) Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.” This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that “world-model understanding is already leaking into creative tooling.” For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that “AI systems will need world models to operate effectively.” CEO Sundar Pichai reinforced this, citing Project Astra, which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini app, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context” (connecting search history, and soon Gmail and Calendar) enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understands (e.g., thermodynamics explained via cycling). 
This, Woodward emphasized, is “where we’re headed with Gemini,” enabled by the Gemini 2.5 Pro model allowing users to “think things into existence.” The new developer tools unveiled at I/O are building blocks. Gemini 2.5 Pro with “Deep Think” and the hyper-efficient 2.5 Flash (now with native audio and URL context grounding from the Gemini API) form the core intelligence. Google also quietly previewed Gemini Diffusion, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp.

The strategic stakes: defending search, courting developers amid an AI arms race

This colossal undertaking is driven by Google’s massive R&D capabilities but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 chief AI officer told VentureBeat, reassuring customers with its full commitment to building out Copilot tooling. (The executive requested anonymity because of the sensitivity of commenting on the intense competition among AI cloud providers.) Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said. Google’s path to potential leadership – its end-run around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. 
And so AR glasses, Pichai said, “maybe that’s the next leap…that’s what’s exciting for me.” But this AI offensive is a race against multiple clocks. First, the $200 billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web. Finally, execution speed matters. Google has been criticized for moving slowly in past years, but over the past 12 months it became clear Google had been working patiently on multiple fronts, and that work has paid off with faster growth than rivals. The challenge of successfully navigating this AI transition at massive scale is immense, as evidenced by the recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves. At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos (Microsoft Build keynote, Miti Joshi at 22:52, Kadesha Kerr at 51:26). 
Nadella’s “open agentic web” vision (NLWeb, MCP) offers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether Google’s or another competitor’s – within a Microsoft-centric framework. OpenAI, meanwhile, is far ahead in consumer reach with its ChatGPT product, with recent references by the company to 600 million monthly users and 800 million weekly users, compared with the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering and is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. Beyond making leading models, OpenAI is making a provocative vertical play with its reported $6.5 billion acquisition of Jony Ive’s IO, pledging to move “beyond these legacy products” and hinting at a hardware product that would attempt to do for AI what the iPhone did for mobile. While any of this may disrupt Google’s next-gen personal computing ambitions, OpenAI’s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocols (like MCP) and easier model interchangeability. Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google’s VP of Developer X, told VentureBeat, serving Google’s diverse global developer community means “it’s not a one size fits all,” leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs. Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral, and Cohere models, giving AWS customers a pragmatic, multi-model default. 
For enterprise decision-makers: navigating Google’s ‘world model’ future

Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations:

    Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become default.
    Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google’s “world model” research, multimodal capabilities (like Veo 3 and Imagen 4 showcased by Woodward at I/O), and the AGI trajectory promised by Google offers a path to potentially significant innovation.
    Prepare for a new interaction paradigm: Success for Google’s “universal assistant” would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery.
    Factor in the long game (and its risks): Aligning with Google’s vision is a long-term commitment. The full “world model” and AGI are potentially distant horizons. Decision-makers must balance this with immediate needs and platform complexities.
    Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now, and disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility.

These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat’s Transform 2025 next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged. 
Google’s defining offensive: shaping the future or strategic overreach?

Google’s I/O spectacle was a strong statement: Google signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a “world model” and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense. The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focused so broadly on both consumers and businesses – an agenda arguably much broader than that of its key competitors? The next few years will be pivotal. If Google delivers on its “world model” vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could be a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly.
  • Microsoft Adds Gen AI Features to Paint, Snipping Tool, and Notepad

    Microsoft has added a slew of new generative AI features to Paint, Snipping Tool, and Notepad, courtesy of Microsoft Copilot. But unfortunately for many, some of these new features are only available on Copilot-compatible Windows machines. If you’re still using Microsoft Paint, you can use the new AI-powered feature to create custom stickers by simply typing in a prompt, though you'll need a Copilot-compatible device to get it to work. To give the new feature a test drive, click the Sticker Generator button in the Copilot menu. From there, type a description of the sticker you want to create – for example, “monkey wearing a suit” – and hit the Generate button. Paint will then generate a set of unique stickers based on your prompt. If you fall in love with your new AI sticker, you can access all your recently generated stickers by tapping the new Stickers option in the Paint toolbar. Copilot can also now help trim the time it takes to edit your clippings. The new feature, Perfect Screenshot, will resize your clipping based on the content in your selection using AI. You can enable it by holding Ctrl after activating the Snipping Tool while selecting the region of your screen you want to capture. Unfortunately, Perfect Screenshot in Snipping Tool will be available only on Copilot+ PCs. In addition, you can now write new content in Notepad using generative AI by entering a prompt. Place your cursor where you want to insert new text, or select the content you’d like to use as a reference. Then right-click and choose Write, select Write from the Copilot menu, or use the Ctrl + Q keyboard shortcut. Enter your instruction into the dialog and click Send. The AI-generated output will appear directly on the canvas. You can select Keep Text to add it to your document or Discard if it doesn’t fit your needs. You can also continue refining the output with follow-up prompts to evolve your draft further. But to use Write, you'll need to make sure you have enough of Microsoft's new AI credits. These new features may ultimately seem small fry compared with what Microsoft says it has lined up for the future. In a keynote at Microsoft’s Build conference earlier this week, CEO Satya Nadella made some hugely ambitious promises that AI is ready to start transforming the experiences of Microsoft users, while announcing new AI-focused tools for developers. Nadella said the tech world is in the middle of “another platform shift,” equivalent to 1991, when Win32 developer tools were rolling out, or 1996, when a variety of companies built new development tools designed for the internet.
    ME.PCMAG.COM
  • Tesla is going all in to finish first in the robotaxi race


    2025-05-25T10:37:01Z


    This post originally appeared in the BI Today newsletter.

    Welcome back to our Sunday edition, where we round up some of our top stories and take you inside our newsroom. This week, BI's Polly Thompson took an inside look at how artificial intelligence is set to upend a pillar of the white-collar world: the Big Four.

    On the agenda today:
    Many millennials face a cursed inheritance with their parents' homes.
    Internal memos reveal how an ex-Facebook exec leads Microsoft's new AI unit.
    Losing faith in the ROI of college, Gen Z is pivoting to blue-collar jobs.
    Wall Street bigwigs are questioning the safety of government bonds. Now what?
    But first: Tesla's robotaxis are taking the wheel.

    This week's dispatch


    Tesla's big bet

    I remain in awe of self-driving cars. I took my first Waymo earlier this year in San Francisco. Like any newbie, I immediately pulled out my phone, recorded the ride, and then gleefully shared videos with friends and family. The market for robotaxis is well beyond the shock-and-awe phase. For Tesla, the stakes are high to get it right. The EV maker's long-awaited autonomous ride-hailing service is expected to debut next month in Austin. It will join Waymo, owned by Google's parent company Alphabet, which is already entrenched in San Francisco and expanding into other cities. My BI colleagues Lloyd Lee and Alistair Barr tried to see which company offers the better self-driving experience: Tesla or Waymo. They test drove both, expecting the results of their not-so-scientific test to come down to minute details. The results surprised them. While the rides were mostly similar, the differentiator was Tesla running a red light at a complex intersection. It was an error too big to overlook. Waymo won the test. Lloyd and Alistair's story ricocheted around the internet and social media. On Tuesday, CNBC's David Faber pressed Tesla CEO Elon Musk about it, particularly the Tesla running a red light. Musk didn't address specific details in BI's reporting. Instead, he said Tesla's robotaxis will be "geo-fenced" — meaning they will avoid some intersections and certain parts of Austin. Waymo already uses geo-fencing. Its car avoided the intersection where the Tesla ran the red light, instead taking a route that was farther away and less time-efficient but perhaps safer to navigate, according to the BI story. Tesla's robotaxi plans come at a critical time for a brand that's taken a hit from Musk's work with the Trump administration. Overseas competition is also ramping up, and prices for used Teslas, including Cybertrucks, are falling. The excitement around the robotaxis is helping, though. Tesla's stock has risen about 40% since Musk talked up the robotaxi last month and signaled he was re-committing to Tesla and stepping back from DOGE. We'll stay all over this coverage for you, including the big debut.

    The new millennial home dilemma

    Millennials are set to benefit from a massive wealth transfer from their boomer parents, most of which is held up in real estate. But because boomers tend to stay in their homes for decades, many children will inherit properties in need of some serious TLC.

    Microsoft's "age of AI agents"


    CEO Satya Nadella recently tapped Jay Parikh, formerly Facebook's global head of engineering, to spearhead Microsoft's new AI unit, CoreAI. BI viewed internal memos to get a glimpse of Parikh's vision and progress. Parikh is focusing on cultural shifts, operational improvements, and customer experience as he leads Microsoft into a new era. He has plans for an AI "agent factory."

    From PowerPoint to plumbing


    AI is decimating jobs, and the cost of college is ever-rising. Gen Zers are losing faith in the ROI of a degree, but they've got another option: the trades. White-collar jobs are stagnating, but fields like plumbing, construction, and electrical work are projected to grow. Blue-collar jobs offer a work-life balance and a path to becoming your own boss.

    The shaky bond market


    Bonds have always been viewed as a safe haven, especially ones backed by the US government. But concerns over the growing deficit are changing investors' perspective on the asset. KKR has cast doubt over bonds, and JPMorgan CEO Jamie Dimon has been vocal about US credit being a "bad risk." Here's what investors have to think about amid the turmoil.

    This week's quote: "But if you want one of these jobs, you've got to play the game." — A recent graduate who moved to New York City early to be in a good position for the private-equity recruiting process.

    More of this week's top reads:
    Duolingo drama underscores the new corporate balancing act on AI hype.
    Elon Musk went on a media blitz. Here are five takeaways from his interviews.
    See inside the luxurious Boeing 747 Qatar is giving to Trump to serve as Air Force One.
    Instagram head Adam Mosseri on the "paradigm shift" from posting in public to sharing in private.
    Four reasons Walmart is raising prices and Home Depot isn't.
    Please, Jony Ive, I beg you not to make a voice device.
    Meet the Yale student and hacker moonlighting as a cybersecurity watchdog.
    Inside the little-known perks that come from a stock exchange "bake-off."
    Why these Americans agree with the DOGE firings: "Welcome to the real world."

    The BI Today team: Dan DeFrancesco, deputy editor and anchor, in New York. Grace Lett, editor, in Chicago. Amanda Yen, associate editor, in New York. Lisa Ryan, executive editor, in New York. Elizabeth Casolo, fellow, in Chicago.
    #tesla #going #all #finish #first
    Tesla is going all in to finish first in the robotaxi race
    Lloyd Lee/BI 2025-05-25T10:37:01Z d Read in app This story is available exclusively to Business Insider subscribers. Become an Insider and start reading now. Have an account? This post originally appeared in the BI Today newsletter. You can sign up for Business Insider's daily newsletter here. Welcome back to our Sunday edition, where we round up some of our top stories and take you inside our newsroom. This week, BI's Polly Thompson took an inside look at how artificial intelligence is set to upend a pillar of the white-collar world: the Big Four.On the agenda today:Many millennials face a cursed inheritance with their parents' homes.Internal memos reveal how an ex-Facebook exec leads Microsoft's new AI unit.Losing faith in the ROI of college, Gen Z is pivoting to blue-collar jobs.Wall Street bigwigs are questioning the safety of government bonds. Now what?But first: Tesla's robotaxis are taking the wheel.If this was forwarded to you, sign up here. Download Business Insider's app here.This week's dispatch Robin Marchant/Getty, Sean Gallup/Getty, Tyler Le/BI Tesla's big betI remain in awe of self-driving cars.I took my first Waymo earlier this year in San Francisco. Like any newbie, I immediately pulled out my phone, recorded the ride, and then gleefully shared videos with friends and family.The market for robotaxis is well beyond the shock and awe phase. For Tesla, the stakes are high to get it right.The EV maker's long-awaited autonomous ride-hailing service is expected to debut next month in Austin. It will join Waymo, owned by Google's parent company Alphabet, which is already entrenched in San Francisco and expanding into other cities.My BI colleagues Lloyd Lee and Alistair Barr tried to see which company offers the better self-driving experience: Tesla or Waymo. 
They test drove both, expecting the results of their not-so-scientific test to come down to minute details..The results surprised them.While the rides were mostly similar, the differentiator was Tesla running a red light at a complex intersection. It was an error too big to overlook. Waymo won the test.Lloyd and Alistair's story ricocheted around the internet and social media. On Tuesday, CNBC's David Faber pressed Tesla CEO Elon Musk about it, particularly the Tesla running a red light.Musk didn't address specific details in BI's reporting. Instead, he said Tesla's robotaxis will be "geo-fenced" — meaning they will avoid some intersections and certain parts of Austin.Waymo already uses geo-fencing. Its car avoided the intersection where the Tesla ran the red light, instead taking a route that was farther away and less time-efficient but perhaps safer to navigate, according to the BI story.Tesla's robotaxi plans come at a critical time for a brand that's taken a hit from Musk's work with the Trump administration. Overseas competition is also ramping up, and prices for used Teslas, including Cybertrucks, are falling.The excitement around the robotaxis is helping, though. Tesla's stock has risen about 40% since Musk talked up the robotaxi last month and signaled he was re-committing to Tesla and stepping back from DOGE.We'll stay all over this coverage for you, including the big debut.The new millennial home dilemmaMillennials are set to benefit from a massive wealth transfer from their boomer parents, most of which is held up in real estate.But because boomers tend to stay in their homes for decades, many children will inherit properties in need of some serious TLC.Microsoft's "age of AI agents" Microsoft CEO Satya Nadella recently tapped Jay Parikh, formerly Facebook's global head of engineering, to spearhead Microsoft's new AI unit, CoreAI. 
BI viewed internal memos to get a glimpse of Parikh's vision and progress. Parikh is focusing on cultural shifts, operational improvements, and customer experience as he leads Microsoft into a new era. He has plans for an AI "agent factory."

From PowerPoint to plumbing

AI is decimating jobs, and the cost of college is ever-rising. Gen Zers are losing faith in the ROI of a degree, but they've got another option: the trades. White-collar jobs are stagnating, but fields like plumbing, construction, and electrical work are projected to grow. Blue-collar jobs offer work-life balance and a path to becoming your own boss.

The shaky bond market

Bonds have always been viewed as a safe haven, especially ones backed by the US government. But concerns over the growing deficit are changing investors' perspective on the asset. KKR has cast doubt over bonds, and JPMorgan CEO Jamie Dimon has been vocal about US credit being a "bad risk." Here's what investors have to think about amid the turmoil.

This week's quote:

"But if you want one of these jobs, you've got to play the game."

— A recent graduate who moved to New York City early to be in a good position for the private-equity recruiting process.

More of this week's top reads:

Duolingo drama underscores the new corporate balancing act on AI hype.
Elon Musk went on a media blitz.
Here are five takeaways from his interviews.
See inside the luxurious Boeing 747 Qatar is giving to Trump to serve as Air Force One.
Instagram head Adam Mosseri on the "paradigm shift" from posting in public to sharing in private.
Four reasons Walmart is raising prices and Home Depot isn't.
Please, Jony Ive, I beg you not to make a voice device.
Meet the Yale student and hacker moonlighting as a cybersecurity watchdog.
Inside the little-known perks that come from a stock exchange "bake-off."
Why these Americans agree with the DOGE firings: "Welcome to the real world."

The BI Today team: Dan DeFrancesco, deputy editor and anchor, in New York. Grace Lett, editor, in Chicago. Amanda Yen, associate editor, in New York. Lisa Ryan, executive editor, in New York. Elizabeth Casolo, fellow, in Chicago.